Jan 26 07:53:39 crc systemd[1]: Starting Kubernetes Kubelet... Jan 26 07:53:39 crc restorecon[4612]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Jan 26 07:53:39 
crc restorecon[4612]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 26 07:53:39 crc restorecon[4612]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 
07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:39 crc restorecon[4612]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc 
restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 26 07:53:39 crc restorecon[4612]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:39 crc restorecon[4612]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 
crc restorecon[4612]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 
crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 
07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 07:53:40 crc 
restorecon[4612]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 
07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc 
restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 07:53:40 crc restorecon[4612]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 07:53:40 crc restorecon[4612]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 07:53:40 crc kubenswrapper[4806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 07:53:40 crc kubenswrapper[4806]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 07:53:40 crc kubenswrapper[4806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 07:53:40 crc kubenswrapper[4806]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 26 07:53:40 crc kubenswrapper[4806]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 07:53:40 crc kubenswrapper[4806]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.853324 4806 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857748 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857771 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857778 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857784 4806 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857791 4806 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857797 4806 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857805 4806 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857811 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857816 4806 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857821 4806 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857825 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857830 4806 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857844 4806 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
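
Note: the deprecation warnings that kubenswrapper prints above all point to the same remedy, moving the flag values into the file passed via the kubelet's --config option (see the kubelet-config-file documentation linked in the messages). The following is a minimal sketch of what that could look like, assuming the upstream KubeletConfiguration (kubelet.config.k8s.io/v1beta1) field names; the endpoint, taint, reservation, and eviction values are illustrative placeholders, not values taken from this node, and the output file name kubelet-config.json is likewise assumed.

#!/usr/bin/env python3
# Illustrative sketch only: express the flags reported as deprecated above
# as a kubelet --config file. Field names follow the upstream
# KubeletConfiguration v1beta1 API; all concrete values are placeholders.
import json

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --container-runtime-endpoint (placeholder CRI-O socket)
    "containerRuntimeEndpoint": "unix:///var/run/crio/crio.sock",
    # replaces --volume-plugin-dir
    "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
    # replaces --register-with-taints
    "registerWithTaints": [
        {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
    ],
    # replaces --system-reserved
    "systemReserved": {"cpu": "500m", "memory": "1Gi"},
    # --minimum-container-ttl-duration is superseded by eviction settings
    "evictionHard": {"memory.available": "100Mi"},
}

# The kubelet accepts the --config file as JSON or YAML; JSON keeps this
# sketch dependency-free.
with open("kubelet-config.json", "w") as f:
    json.dump(kubelet_config, f, indent=2)
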
Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857853 4806 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857858 4806 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857864 4806 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857868 4806 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857873 4806 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857878 4806 feature_gate.go:330] unrecognized feature gate: Example Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857883 4806 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857888 4806 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857893 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857899 4806 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857904 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857913 4806 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857918 4806 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857922 4806 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857926 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857931 4806 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857935 4806 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857940 4806 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857944 4806 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857948 4806 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857953 4806 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857957 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857961 4806 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857966 4806 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857975 4806 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857980 4806 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 
07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857985 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857989 4806 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857994 4806 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.857999 4806 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858003 4806 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858007 4806 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858013 4806 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858018 4806 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858025 4806 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858029 4806 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858039 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858043 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858047 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858051 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858058 4806 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858064 4806 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858069 4806 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858074 4806 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858080 4806 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858087 4806 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858093 4806 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858099 4806 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858111 4806 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858115 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858120 4806 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858124 4806 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858129 4806 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858134 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858138 4806 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858145 4806 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858151 4806 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.858159 4806 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
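[annotation] The block of "unrecognized feature gate" warnings above comes from the kubelet rejecting OpenShift-only gate names it does not know about; the same list is re-emitted each time the gate configuration is parsed during startup, so it dominates the journal without indicating a fault. A minimal sketch (Python, reading journal text from stdin; the pipeline in the comment is only an example) that deduplicates the warnings and counts how often each gate is reported:

```python
#!/usr/bin/env python3
"""Summarize repeated 'unrecognized feature gate' kubelet warnings."""
import re
import sys
from collections import Counter

# The kubelet logs these as, e.g.:
#   W0126 07:53:40.857748 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfig
UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\S+)")

def summarize(lines):
    """Return a Counter of gate name -> number of times it was reported."""
    counts = Counter()
    for line in lines:
        match = UNRECOGNIZED.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

if __name__ == "__main__":
    # Example usage:  journalctl -u kubelet -b | python3 gate_warnings.py
    for gate, n in sorted(summarize(sys.stdin).items()):
        print(f"{gate}: {n}")
```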
Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858304 4806 flags.go:64] FLAG: --address="0.0.0.0" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858317 4806 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858326 4806 flags.go:64] FLAG: --anonymous-auth="true" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858339 4806 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858346 4806 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858352 4806 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858360 4806 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858366 4806 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858372 4806 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858378 4806 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858388 4806 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858394 4806 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858399 4806 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858405 4806 flags.go:64] FLAG: --cgroup-root="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858410 4806 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858416 4806 flags.go:64] FLAG: --client-ca-file="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858423 4806 flags.go:64] FLAG: --cloud-config="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858429 4806 flags.go:64] FLAG: --cloud-provider="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858436 4806 flags.go:64] FLAG: --cluster-dns="[]" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858582 4806 flags.go:64] FLAG: --cluster-domain="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858589 4806 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858594 4806 flags.go:64] FLAG: --config-dir="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858599 4806 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858633 4806 flags.go:64] FLAG: --container-log-max-files="5" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858650 4806 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858654 4806 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858660 4806 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858665 4806 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858670 4806 flags.go:64] FLAG: --contention-profiling="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 
07:53:40.858674 4806 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858680 4806 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858686 4806 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858725 4806 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858732 4806 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858736 4806 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858741 4806 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.858745 4806 flags.go:64] FLAG: --enable-load-reader="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859086 4806 flags.go:64] FLAG: --enable-server="true" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859095 4806 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859105 4806 flags.go:64] FLAG: --event-burst="100" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859110 4806 flags.go:64] FLAG: --event-qps="50" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859114 4806 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859120 4806 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859124 4806 flags.go:64] FLAG: --eviction-hard="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859130 4806 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859134 4806 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859139 4806 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859144 4806 flags.go:64] FLAG: --eviction-soft="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859148 4806 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859153 4806 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859157 4806 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859162 4806 flags.go:64] FLAG: --experimental-mounter-path="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859166 4806 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859170 4806 flags.go:64] FLAG: --fail-swap-on="true" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859174 4806 flags.go:64] FLAG: --feature-gates="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859180 4806 flags.go:64] FLAG: --file-check-frequency="20s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859184 4806 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859189 4806 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859193 4806 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 
07:53:40.859198 4806 flags.go:64] FLAG: --healthz-port="10248" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859202 4806 flags.go:64] FLAG: --help="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859207 4806 flags.go:64] FLAG: --hostname-override="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859212 4806 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859216 4806 flags.go:64] FLAG: --http-check-frequency="20s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859221 4806 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859225 4806 flags.go:64] FLAG: --image-credential-provider-config="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859229 4806 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859233 4806 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859237 4806 flags.go:64] FLAG: --image-service-endpoint="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859244 4806 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859248 4806 flags.go:64] FLAG: --kube-api-burst="100" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859252 4806 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859256 4806 flags.go:64] FLAG: --kube-api-qps="50" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859261 4806 flags.go:64] FLAG: --kube-reserved="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859265 4806 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859269 4806 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859273 4806 flags.go:64] FLAG: --kubelet-cgroups="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859277 4806 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859281 4806 flags.go:64] FLAG: --lock-file="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859285 4806 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859289 4806 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859293 4806 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859299 4806 flags.go:64] FLAG: --log-json-split-stream="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859304 4806 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859308 4806 flags.go:64] FLAG: --log-text-split-stream="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859312 4806 flags.go:64] FLAG: --logging-format="text" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859316 4806 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859320 4806 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859324 4806 flags.go:64] FLAG: --manifest-url="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859328 4806 
flags.go:64] FLAG: --manifest-url-header="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859335 4806 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859341 4806 flags.go:64] FLAG: --max-open-files="1000000" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859346 4806 flags.go:64] FLAG: --max-pods="110" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859351 4806 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859356 4806 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859360 4806 flags.go:64] FLAG: --memory-manager-policy="None" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859365 4806 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859369 4806 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859373 4806 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859377 4806 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859397 4806 flags.go:64] FLAG: --node-status-max-images="50" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859401 4806 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859406 4806 flags.go:64] FLAG: --oom-score-adj="-999" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859409 4806 flags.go:64] FLAG: --pod-cidr="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859413 4806 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859424 4806 flags.go:64] FLAG: --pod-manifest-path="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859429 4806 flags.go:64] FLAG: --pod-max-pids="-1" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859433 4806 flags.go:64] FLAG: --pods-per-core="0" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859437 4806 flags.go:64] FLAG: --port="10250" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859442 4806 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859445 4806 flags.go:64] FLAG: --provider-id="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859449 4806 flags.go:64] FLAG: --qos-reserved="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859453 4806 flags.go:64] FLAG: --read-only-port="10255" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859458 4806 flags.go:64] FLAG: --register-node="true" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859461 4806 flags.go:64] FLAG: --register-schedulable="true" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859465 4806 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859472 4806 flags.go:64] FLAG: --registry-burst="10" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859476 4806 flags.go:64] FLAG: --registry-qps="5" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859480 4806 flags.go:64] 
FLAG: --reserved-cpus="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859487 4806 flags.go:64] FLAG: --reserved-memory="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859493 4806 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859497 4806 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859501 4806 flags.go:64] FLAG: --rotate-certificates="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859505 4806 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859512 4806 flags.go:64] FLAG: --runonce="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859516 4806 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859538 4806 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859542 4806 flags.go:64] FLAG: --seccomp-default="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859546 4806 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859550 4806 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859555 4806 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859559 4806 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859563 4806 flags.go:64] FLAG: --storage-driver-password="root" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859567 4806 flags.go:64] FLAG: --storage-driver-secure="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859571 4806 flags.go:64] FLAG: --storage-driver-table="stats" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859575 4806 flags.go:64] FLAG: --storage-driver-user="root" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859579 4806 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859583 4806 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859588 4806 flags.go:64] FLAG: --system-cgroups="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859592 4806 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859600 4806 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859605 4806 flags.go:64] FLAG: --tls-cert-file="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859610 4806 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859618 4806 flags.go:64] FLAG: --tls-min-version="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859622 4806 flags.go:64] FLAG: --tls-private-key-file="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859626 4806 flags.go:64] FLAG: --topology-manager-policy="none" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859630 4806 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859634 4806 flags.go:64] FLAG: --topology-manager-scope="container" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859638 4806 flags.go:64] 
FLAG: --v="2" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859645 4806 flags.go:64] FLAG: --version="false" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859651 4806 flags.go:64] FLAG: --vmodule="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859656 4806 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.859660 4806 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859812 4806 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859817 4806 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859824 4806 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859829 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859833 4806 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859837 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859846 4806 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859850 4806 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859854 4806 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859858 4806 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859861 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859865 4806 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859869 4806 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859873 4806 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
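[annotation] The FLAG: --name="value" entries above record every command-line flag the kubelet was started with, including the ones flagged as deprecated earlier in the boot (for example --system-reserved and --container-runtime-endpoint, which the warnings suggest moving into the file given by --config, here /etc/kubernetes/kubelet.conf); the gate-warning pass then repeats below. A rough sketch that parses the dump into a dictionary and points out which deprecated flags have a config-file counterpart — the flag-to-field mapping reflects upstream KubeletConfiguration field names as I recall them, so treat it as an assumption, not an authoritative table:

```python
#!/usr/bin/env python3
"""Parse the kubelet 'FLAG: --x="y"' startup dump from journal text on stdin."""
import re
import sys

FLAG_LINE = re.compile(r'FLAG: --([A-Za-z0-9-]+)="(.*?)"')

# Deprecated flags seen in this boot and the KubeletConfiguration fields they
# roughly correspond to (assumed from upstream docs; verify before relying on it).
DEPRECATED_TO_CONFIG = {
    "container-runtime-endpoint": "containerRuntimeEndpoint",
    "volume-plugin-dir": "volumePluginDir",
    "register-with-taints": "registerWithTaints",
    "system-reserved": "systemReserved",
    # --minimum-container-ttl-duration has no direct field; the warning above
    # says to use eviction settings (evictionHard / evictionSoft) instead.
}

def parse_flags(lines):
    """Return {flag_name: value} for every FLAG: entry in the input."""
    flags = {}
    for line in lines:
        m = FLAG_LINE.search(line)
        if m:
            flags[m.group(1)] = m.group(2)
    return flags

if __name__ == "__main__":
    flags = parse_flags(sys.stdin)
    for name, field in DEPRECATED_TO_CONFIG.items():
        if name in flags:
            print(f"--{name}={flags[name]!r}  ->  consider setting '{field}' in the kubelet config file")
```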
Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859878 4806 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859882 4806 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859887 4806 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859890 4806 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859894 4806 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859898 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859901 4806 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859905 4806 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859909 4806 feature_gate.go:330] unrecognized feature gate: Example Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859912 4806 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859916 4806 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859920 4806 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859924 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859927 4806 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859931 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859935 4806 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859938 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859942 4806 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859961 4806 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859965 4806 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859970 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859977 4806 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859982 4806 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859986 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859994 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.859999 4806 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 
26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860003 4806 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860007 4806 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860011 4806 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860015 4806 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860018 4806 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860022 4806 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860027 4806 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860032 4806 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860036 4806 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860040 4806 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860044 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860048 4806 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860053 4806 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860057 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860061 4806 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860066 4806 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860071 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860075 4806 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860079 4806 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860083 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860087 4806 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860091 4806 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860095 4806 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860099 4806 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860103 4806 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860108 4806 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860112 4806 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860117 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860121 4806 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860125 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.860131 4806 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.860145 4806 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.871392 4806 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.871449 4806 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871628 4806 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871646 4806 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871659 4806 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871670 4806 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871683 4806 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
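[annotation] Once a pass over the gate configuration completes, the kubelet prints the effective set as "feature gates: {map[...]}" alongside the kubelet version, v1.31.5; the same map is printed again verbatim later in the startup while the warning pass repeats below. A small sketch, assuming that exact map[Key:bool ...] formatting, that turns the line into a Python dict so the resolved gates can be diffed between boots:

```python
#!/usr/bin/env python3
"""Extract the resolved kubelet feature-gate map from journal text on stdin."""
import re
import sys

# Matches: feature gates: {map[CloudDualStackNodeIPs:true DynamicResourceAllocation:false ...]}
GATES_LINE = re.compile(r"feature gates: \{map\[(.*?)\]\}")
PAIR = re.compile(r"(\S+):(true|false)")

def parse_gate_map(text):
    """Return {gate_name: bool} from the first 'feature gates: {map[...]}' occurrence."""
    m = GATES_LINE.search(text)
    if not m:
        return {}
    return {name: value == "true" for name, value in PAIR.findall(m.group(1))}

if __name__ == "__main__":
    gates = parse_gate_map(sys.stdin.read())
    for name, enabled in sorted(gates.items()):
        print(f"{name} = {enabled}")
```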
Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871695 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871706 4806 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871716 4806 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871727 4806 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871739 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871749 4806 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871760 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871772 4806 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871782 4806 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871792 4806 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871802 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871812 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871821 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871831 4806 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871840 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871851 4806 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871861 4806 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871870 4806 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871880 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871889 4806 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871900 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871909 4806 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871923 4806 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871936 4806 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871948 4806 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871961 4806 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871973 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871983 4806 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.871994 4806 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872016 4806 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872027 4806 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872040 4806 feature_gate.go:330] unrecognized feature gate: Example Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872051 4806 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872061 4806 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872073 4806 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872083 4806 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872094 4806 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872104 4806 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872113 4806 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872123 4806 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872132 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872142 4806 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872152 4806 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872161 4806 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872171 4806 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872180 4806 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872194 4806 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872208 4806 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872219 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872229 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872239 4806 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872277 4806 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872287 4806 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872298 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872309 4806 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872318 4806 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872330 4806 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872341 4806 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872352 4806 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872363 4806 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872372 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872385 4806 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872398 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872409 4806 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872419 4806 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872433 4806 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.872453 4806 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872834 4806 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872855 4806 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872867 4806 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872878 4806 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872891 4806 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872907 4806 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872920 4806 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872932 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872944 4806 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872954 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872965 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872979 4806 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.872991 4806 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873002 4806 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873015 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873027 4806 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873039 4806 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873051 4806 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873064 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873076 4806 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873086 4806 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873094 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873102 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873110 4806 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873118 4806 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873126 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873134 4806 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873142 4806 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873150 4806 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873158 4806 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873166 4806 feature_gate.go:330] unrecognized feature gate: Example Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873174 4806 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873182 4806 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873190 4806 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873200 4806 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873208 4806 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873215 4806 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873223 4806 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 26 
07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873231 4806 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873238 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873246 4806 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873254 4806 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873261 4806 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873269 4806 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873277 4806 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873285 4806 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873294 4806 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873302 4806 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873310 4806 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873318 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873326 4806 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873334 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873342 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873350 4806 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873357 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873365 4806 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873372 4806 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873380 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873388 4806 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873396 4806 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873403 4806 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873411 4806 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873419 4806 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873427 4806 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstallIBMCloud Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873435 4806 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873443 4806 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873450 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873458 4806 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873466 4806 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873474 4806 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.873482 4806 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.873496 4806 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.873803 4806 server.go:940] "Client rotation is on, will bootstrap in background" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.879097 4806 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.879254 4806 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
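[annotation] The client credentials loaded here live in /var/lib/kubelet/pki/kubelet-client-current.pem, and the entries that follow show the certificate manager comparing the certificate's expiry (2026-02-24) against its rotation deadline and deciding to rotate right away. A quick way to read the expiry yourself, sketched in Python and assuming the third-party cryptography package is available; since the file holds a cert/key pair, the snippet pulls out the first certificate block before parsing:

```python
#!/usr/bin/env python3
"""Print the expiry of the kubelet client certificate (requires 'cryptography')."""
import re
from datetime import datetime, timezone

from cryptography import x509

PEM_PATH = "/var/lib/kubelet/pki/kubelet-client-current.pem"
CERT_BLOCK = re.compile(
    rb"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----", re.DOTALL
)

def client_cert_expiry(path=PEM_PATH):
    """Return the notAfter timestamp of the first certificate in the PEM bundle."""
    with open(path, "rb") as fh:
        data = fh.read()
    block = CERT_BLOCK.search(data)
    if block is None:
        raise ValueError(f"no certificate block found in {path}")
    cert = x509.load_pem_x509_certificate(block.group(0))
    # not_valid_after is reported in UTC; attach the timezone for arithmetic below.
    return cert.not_valid_after.replace(tzinfo=timezone.utc)

if __name__ == "__main__":
    expiry = client_cert_expiry()
    remaining = expiry - datetime.now(timezone.utc)
    print(f"client certificate expires {expiry.isoformat()} ({remaining.days} days from now)")
```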
Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.880149 4806 server.go:997] "Starting client certificate rotation" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.880199 4806 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.880717 4806 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-01 09:47:32.077216258 +0000 UTC Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.880870 4806 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.889501 4806 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 07:53:40 crc kubenswrapper[4806]: E0126 07:53:40.891620 4806 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.893571 4806 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.905035 4806 log.go:25] "Validated CRI v1 runtime API" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.931576 4806 log.go:25] "Validated CRI v1 image API" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.934413 4806 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.937800 4806 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-07-47-34-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.937835 4806 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.952026 4806 manager.go:217] Machine: {Timestamp:2026-01-26 07:53:40.950406927 +0000 UTC m=+0.214815013 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199472640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:8cee8155-a08c-4d0d-aec6-2f132dd9ee01 BootID:6d591560-a509-477f-85dc-1a92a429bf2e Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:12599738368 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076107 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599734272 Type:vfs Inodes:3076107 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039894528 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:de:4a:5e Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:de:4a:5e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fd:9a:be Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:33:61:6f Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:69:4e:cc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e2:0e:4b Speed:-1 Mtu:1496} {Name:eth10 MacAddress:16:49:45:e4:c8:b7 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ba:23:10:85:85:d5 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199472640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.952379 4806 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
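The Machine record above is cAdvisor's inventory of the node: per-filesystem capacity and inode counts, the disk map, the network devices, and an eight-socket, one-thread-per-core CPU topology. A rough, Linux-only sketch of where the per-filesystem Capacity/Inodes figures come from (statfs(2) against each mount point; this is illustrative and not cAdvisor's fs package):

    // fsinfo.go - reproduce the Capacity/Inodes columns of the "Machine:"
    // line for a few mount points taken from the "Filesystem partitions"
    // entry above. Linux-only sketch.
    package main

    import (
        "fmt"
        "log"
        "syscall"
    )

    func main() {
        for _, mount := range []string{"/var", "/boot", "/run", "/tmp"} {
            var st syscall.Statfs_t
            if err := syscall.Statfs(mount, &st); err != nil {
                log.Printf("statfs %s: %v", mount, err)
                continue
            }
            capacity := uint64(st.Bsize) * st.Blocks // bytes
            fmt.Printf("%-6s capacity=%d inodes=%d hasInodes=%t\n",
                mount, capacity, st.Files, st.Files > 0)
        }
    }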
Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.952588 4806 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.952976 4806 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.953178 4806 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.953223 4806 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.953474 4806 topology_manager.go:138] "Creating topology manager with none policy" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.953487 4806 container_manager_linux.go:303] "Creating device plugin manager" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.953776 4806 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.953825 4806 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.954272 4806 state_mem.go:36] "Initialized new in-memory state store" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.954378 4806 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.955245 4806 kubelet.go:418] "Attempting to sync node with API server" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.955277 4806 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
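The NodeConfig entry above carries the resource-management policy the container manager will enforce: the systemd cgroup driver under root "/", 200m CPU / 350Mi memory / 350Mi ephemeral-storage held back as SystemReserved, a per-pod PID limit of 4096, and hard eviction thresholds expressed either as absolute quantities (memory.available < 100Mi) or as fractions of capacity (nodefs.available < 10%, imagefs.available < 15%). The toy sketch below only illustrates that quantity-or-percentage comparison, using the capacities from the Machine line and made-up "available" readings; it is not the kubelet eviction manager:

    // eviction_check.go - toy illustration of the HardEvictionThresholds
    // printed in the NodeConfig above. Not kubelet code; the available
    // readings are hypothetical.
    package main

    import "fmt"

    type threshold struct {
        signal   string
        quantity uint64  // absolute bytes; 0 means "use percentage"
        percent  float64 // fraction of capacity
    }

    // crossed reports whether available has fallen below the threshold.
    func (t threshold) crossed(available, capacity uint64) bool {
        limit := t.quantity
        if limit == 0 {
            limit = uint64(float64(capacity) * t.percent)
        }
        return available < limit
    }

    func main() {
        const Mi = 1 << 20
        thresholds := []threshold{
            {signal: "memory.available", quantity: 100 * Mi}, // 100Mi hard limit
            {signal: "nodefs.available", percent: 0.10},      // 10% of nodefs
            {signal: "imagefs.available", percent: 0.15},     // 15% of imagefs
        }
        capacity := map[string]uint64{
            "memory.available":  25199472640, // MemoryCapacity from the Machine line
            "nodefs.available":  85292941312, // /dev/vda4 capacity
            "imagefs.available": 85292941312,
        }
        available := map[string]uint64{ // hypothetical current readings
            "memory.available":  18 * 1024 * Mi,
            "nodefs.available":  30 * 1024 * Mi,
            "imagefs.available": 30 * 1024 * Mi,
        }
        for _, t := range thresholds {
            fmt.Printf("%-18s crossed=%t\n", t.signal,
                t.crossed(available[t.signal], capacity[t.signal]))
        }
    }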
Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.955305 4806 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.955321 4806 kubelet.go:324] "Adding apiserver pod source" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.955334 4806 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.957451 4806 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.958014 4806 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.958099 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 26 07:53:40 crc kubenswrapper[4806]: E0126 07:53:40.958187 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.958196 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 26 07:53:40 crc kubenswrapper[4806]: E0126 07:53:40.958339 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.959092 4806 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.959800 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.959836 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.959849 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.959862 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.959884 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.959897 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.959931 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.959954 4806 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/downward-api" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.959968 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.959981 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.960005 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.960026 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.960313 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.961010 4806 server.go:1280] "Started kubelet" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.961808 4806 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.961808 4806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.961893 4806 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.962343 4806 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 07:53:40 crc systemd[1]: Started Kubernetes Kubelet. Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.963240 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.963288 4806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.963771 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:01:00.569970354 +0000 UTC Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.964086 4806 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.964109 4806 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.964197 4806 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.964622 4806 server.go:460] "Adding debug handlers to kubelet server" Jan 26 07:53:40 crc kubenswrapper[4806]: E0126 07:53:40.964405 4806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.66:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e38ad63f95f71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 07:53:40.960968561 +0000 UTC m=+0.225376627,LastTimestamp:2026-01-26 07:53:40.960968561 +0000 UTC m=+0.225376627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 07:53:40 crc kubenswrapper[4806]: E0126 07:53:40.969201 4806 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.969454 4806 factory.go:55] Registering systemd factory Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.969486 4806 factory.go:221] Registration of the systemd container factory successfully Jan 26 07:53:40 crc kubenswrapper[4806]: E0126 07:53:40.970393 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="200ms" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.970805 4806 factory.go:153] Registering CRI-O factory Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.970923 4806 factory.go:221] Registration of the crio container factory successfully Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.971101 4806 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.971221 4806 factory.go:103] Registering Raw factory Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.971325 4806 manager.go:1196] Started watching for new ooms in manager Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.972988 4806 manager.go:319] Starting recovery of all containers Jan 26 07:53:40 crc kubenswrapper[4806]: W0126 07:53:40.973975 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 26 07:53:40 crc kubenswrapper[4806]: E0126 07:53:40.974058 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989703 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989778 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989792 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989808 4806 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989821 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989835 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989849 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989862 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989879 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989894 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989912 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989926 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989939 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989956 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.989999 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990014 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990025 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990038 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990048 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990058 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990071 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990085 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990098 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990109 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990135 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990148 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990171 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990185 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990200 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990212 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990227 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990238 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990251 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990265 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990278 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990291 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990304 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990315 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990328 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990341 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990354 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990368 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990381 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990394 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990408 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990423 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990435 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990447 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990459 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990472 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990485 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990499 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990515 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990542 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990556 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990568 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990581 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990592 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990606 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990621 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990634 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990645 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990660 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990675 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990689 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990700 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990711 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990724 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990736 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990747 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990761 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990774 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990787 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990798 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990809 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990821 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990833 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990846 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990857 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990869 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990881 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990895 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990908 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990919 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990932 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990945 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990957 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990967 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990982 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.990995 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991010 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991023 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991039 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991056 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991074 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991089 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991102 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991115 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991128 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991144 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991159 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991175 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991191 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991209 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991231 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991245 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991261 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991278 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991295 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991310 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991328 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991342 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991357 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991372 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991419 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991438 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991456 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991469 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991484 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991500 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991514 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991551 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991564 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991580 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991595 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991609 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991622 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991635 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991649 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991662 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991675 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991687 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991701 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991715 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991729 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991742 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991756 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991769 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991783 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991796 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991810 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991834 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991847 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991857 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991868 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991879 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991889 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991900 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991909 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991919 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991929 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991940 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991950 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991959 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991969 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991979 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991988 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.991998 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992009 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992017 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992028 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992037 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992046 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992054 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992062 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992072 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992079 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992091 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992099 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992109 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992118 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992128 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992139 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992148 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992157 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992167 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992183 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992192 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992221 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992232 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992241 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992251 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992260 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992269 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992277 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992288 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992296 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992306 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992315 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992324 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992334 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992343 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992352 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992361 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992370 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992381 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.992391 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998738 4806 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998781 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998800 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998814 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998828 4806 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998842 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998853 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998867 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998878 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998895 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998905 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998914 4806 reconstruct.go:97] "Volume reconstruction finished" Jan 26 07:53:40 crc kubenswrapper[4806]: I0126 07:53:40.998922 4806 reconciler.go:26] "Reconciler: start to sync state" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.001551 4806 manager.go:324] Recovery completed Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.009948 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.011976 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.012042 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.012059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.016240 4806 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.016275 4806 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.016307 4806 state_mem.go:36] "Initialized new in-memory state store" 
Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.034938 4806 policy_none.go:49] "None policy: Start" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.036283 4806 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.036337 4806 state_mem.go:35] "Initializing new in-memory state store" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.038565 4806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.040554 4806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.040599 4806 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.040634 4806 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 07:53:41 crc kubenswrapper[4806]: E0126 07:53:41.041663 4806 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 07:53:41 crc kubenswrapper[4806]: W0126 07:53:41.041976 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 26 07:53:41 crc kubenswrapper[4806]: E0126 07:53:41.042141 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 26 07:53:41 crc kubenswrapper[4806]: E0126 07:53:41.069441 4806 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.095762 4806 manager.go:334] "Starting Device Plugin manager" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.095816 4806 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.095830 4806 server.go:79] "Starting device plugin registration server" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.096287 4806 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.096301 4806 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.096470 4806 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.096564 4806 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.096571 4806 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 07:53:41 crc kubenswrapper[4806]: E0126 07:53:41.107513 4806 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.142573 4806 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.142693 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.143827 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.143876 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.143886 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.144027 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.144439 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.144514 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.144878 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.144924 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.144939 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.145231 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.145305 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.145342 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.145606 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.145635 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.145648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.146260 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.146293 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.146310 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.146370 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.146385 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.146394 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.146676 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.146745 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.146773 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.147559 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.147585 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.147596 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.148608 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.148648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.148662 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.148792 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.148879 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.148930 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.149684 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.149721 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.149732 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.151471 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.151496 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.151506 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.151756 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.151793 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.152448 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.152485 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.152500 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: E0126 07:53:41.171875 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="400ms" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.196629 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.197745 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.197792 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.197808 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.197838 4806 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 07:53:41 crc kubenswrapper[4806]: E0126 07:53:41.198391 4806 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.66:6443: connect: connection refused" node="crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.201612 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.201651 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.201677 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.201697 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.201715 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.201736 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.201761 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.201804 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.201930 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.201972 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.202016 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.202055 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.202101 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.202144 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.202181 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303368 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303447 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303488 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303548 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303566 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303601 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303611 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303644 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303679 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303678 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303713 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303722 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303732 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303776 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303782 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303746 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303781 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: 
I0126 07:53:41.303825 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303825 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303912 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303924 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303953 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.303982 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.304158 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.304072 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.304310 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.304345 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.304244 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.304479 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.304563 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.399154 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.401076 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.401143 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.401156 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.401200 4806 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 07:53:41 crc kubenswrapper[4806]: E0126 07:53:41.401924 4806 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.66:6443: connect: connection refused" node="crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.480771 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.488656 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.507285 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.522271 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: W0126 07:53:41.528273 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-edad4608e193f1e2466e90c5269503ec0551343aa70d59cb01034b5030b23f05 WatchSource:0}: Error finding container edad4608e193f1e2466e90c5269503ec0551343aa70d59cb01034b5030b23f05: Status 404 returned error can't find the container with id edad4608e193f1e2466e90c5269503ec0551343aa70d59cb01034b5030b23f05 Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.528404 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 07:53:41 crc kubenswrapper[4806]: W0126 07:53:41.531306 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-5755809ac1e2ce4c80292f04904fd1fcc42148cec223d711f2c09932f01e9822 WatchSource:0}: Error finding container 5755809ac1e2ce4c80292f04904fd1fcc42148cec223d711f2c09932f01e9822: Status 404 returned error can't find the container with id 5755809ac1e2ce4c80292f04904fd1fcc42148cec223d711f2c09932f01e9822 Jan 26 07:53:41 crc kubenswrapper[4806]: W0126 07:53:41.534800 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-153974dab757cc9a4b21ca39c722fc24fc580abf254f521429bcb0d3d397de0f WatchSource:0}: Error finding container 153974dab757cc9a4b21ca39c722fc24fc580abf254f521429bcb0d3d397de0f: Status 404 returned error can't find the container with id 153974dab757cc9a4b21ca39c722fc24fc580abf254f521429bcb0d3d397de0f Jan 26 07:53:41 crc kubenswrapper[4806]: W0126 07:53:41.545361 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-3f9e9c0da9c1e3c4c8e8a6eabf863e29173101ee7566af5c73466399ab0a76fe WatchSource:0}: Error finding container 3f9e9c0da9c1e3c4c8e8a6eabf863e29173101ee7566af5c73466399ab0a76fe: Status 404 returned error can't find the container with id 3f9e9c0da9c1e3c4c8e8a6eabf863e29173101ee7566af5c73466399ab0a76fe Jan 26 07:53:41 crc kubenswrapper[4806]: W0126 07:53:41.546924 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-8eccba91f499e16cf8b4b30fd9905351e6066d590a31760423356d05cb9c10b3 WatchSource:0}: Error finding container 8eccba91f499e16cf8b4b30fd9905351e6066d590a31760423356d05cb9c10b3: Status 404 returned error can't find the container with id 8eccba91f499e16cf8b4b30fd9905351e6066d590a31760423356d05cb9c10b3 Jan 26 07:53:41 crc kubenswrapper[4806]: E0126 07:53:41.573400 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="800ms" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.802794 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.804502 4806 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.804553 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.804566 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.804595 4806 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 07:53:41 crc kubenswrapper[4806]: E0126 07:53:41.805021 4806 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.66:6443: connect: connection refused" node="crc" Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.962776 4806 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 26 07:53:41 crc kubenswrapper[4806]: I0126 07:53:41.963900 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:53:22.479630444 +0000 UTC Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.050699 4806 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6" exitCode=0 Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.050801 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6"} Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.050898 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"edad4608e193f1e2466e90c5269503ec0551343aa70d59cb01034b5030b23f05"} Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.051059 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.052845 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.052872 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.052880 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.053626 4806 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990" exitCode=0 Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.053705 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990"} Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.053730 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8eccba91f499e16cf8b4b30fd9905351e6066d590a31760423356d05cb9c10b3"} Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.053818 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.054595 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.054624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.054637 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.055107 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6"} Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.055124 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3f9e9c0da9c1e3c4c8e8a6eabf863e29173101ee7566af5c73466399ab0a76fe"} Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.057329 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6" exitCode=0 Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.057354 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6"} Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.057388 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"153974dab757cc9a4b21ca39c722fc24fc580abf254f521429bcb0d3d397de0f"} Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.057658 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.058970 4806 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f" exitCode=0 Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.058996 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f"} Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.059034 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5755809ac1e2ce4c80292f04904fd1fcc42148cec223d711f2c09932f01e9822"} Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.059131 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.059792 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.059823 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.059835 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.060436 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.060465 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.060478 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.064156 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.065303 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.065346 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.065354 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:42 crc kubenswrapper[4806]: W0126 07:53:42.338259 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 26 07:53:42 crc kubenswrapper[4806]: E0126 07:53:42.338351 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 26 07:53:42 crc kubenswrapper[4806]: W0126 07:53:42.345150 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 26 07:53:42 crc kubenswrapper[4806]: E0126 07:53:42.345196 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 26 07:53:42 crc kubenswrapper[4806]: E0126 
07:53:42.374315 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="1.6s" Jan 26 07:53:42 crc kubenswrapper[4806]: W0126 07:53:42.445877 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 26 07:53:42 crc kubenswrapper[4806]: E0126 07:53:42.445954 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 26 07:53:42 crc kubenswrapper[4806]: W0126 07:53:42.546241 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.66:6443: connect: connection refused Jan 26 07:53:42 crc kubenswrapper[4806]: E0126 07:53:42.546324 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.66:6443: connect: connection refused" logger="UnhandledError" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.606025 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.610019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.610075 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.610087 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.610117 4806 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.964316 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 15:37:30.032199406 +0000 UTC Jan 26 07:53:42 crc kubenswrapper[4806]: I0126 07:53:42.984375 4806 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.063844 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a8e90cbd21cd7080a82e2440f441264b1fd1b45ee048fbcadc7afbd7b4384996"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.063889 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
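
The repeated "Failed to ensure lease exists, will retry" entries refer to the per-node Lease object in the kube-node-lease namespace that the kubelet uses as its heartbeat. A hedged client-go sketch of reading that Lease directly is below; the kubeconfig path is an assumption chosen for illustration, not something the log states:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the environment at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// The kubelet heartbeats through a Lease named after the node.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(ctx, "crc", metav1.GetOptions{})
	if err != nil {
		fmt.Println("lease not reachable or absent:", err)
		return
	}
	fmt.Println("lease last renewed at:", lease.Spec.RenewTime)
}
```
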
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c6719f7820924ce02c29e99fc9fb5c6e733597c1f006b3a2701046680af978e8"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.063899 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"da291d9c45edfc19eed33fce385cb1d8005585d2e9ac5fdfc50040a9c3038683"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.063980 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.064957 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.064983 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.064992 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.067441 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.067472 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.067482 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.067591 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.071733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.071751 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.071781 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.074026 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.074062 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.074073 4806 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.074083 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.074091 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.074205 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.074938 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.074953 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.074962 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.078012 4806 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d" exitCode=0 Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.078086 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.078216 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.078942 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.078959 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.078967 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.080333 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d54a191a8969d10a108b9fa36fe6c03f55969e7bd6c73aef14d2936d92290ccb"} Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.080397 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.081043 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.081062 4806 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.081072 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.845351 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:43 crc kubenswrapper[4806]: I0126 07:53:43.965222 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 04:48:32.44142469 +0000 UTC Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.085604 4806 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9" exitCode=0 Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.085643 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9"} Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.085752 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.085788 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.085800 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.085850 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.086900 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.086926 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.086936 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.087000 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.087032 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.087047 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.087440 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.087476 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.087490 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.351712 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.357372 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:44 crc kubenswrapper[4806]: I0126 07:53:44.965402 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 05:45:12.451923637 +0000 UTC Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.093012 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78"} Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.093063 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e"} Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.093077 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90"} Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.093089 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d"} Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.093092 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.093959 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.093990 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.094001 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.315486 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.315898 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.315983 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.318197 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.318369 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.318390 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:45 crc kubenswrapper[4806]: I0126 07:53:45.966231 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2026-01-04 17:35:28.784752848 +0000 UTC Jan 26 07:53:46 crc kubenswrapper[4806]: I0126 07:53:46.099767 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 07:53:46 crc kubenswrapper[4806]: I0126 07:53:46.099836 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:46 crc kubenswrapper[4806]: I0126 07:53:46.099762 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b"} Jan 26 07:53:46 crc kubenswrapper[4806]: I0126 07:53:46.099880 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:46 crc kubenswrapper[4806]: I0126 07:53:46.101179 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:46 crc kubenswrapper[4806]: I0126 07:53:46.101222 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:46 crc kubenswrapper[4806]: I0126 07:53:46.101250 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:46 crc kubenswrapper[4806]: I0126 07:53:46.101288 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:46 crc kubenswrapper[4806]: I0126 07:53:46.101316 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:46 crc kubenswrapper[4806]: I0126 07:53:46.101334 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.023320 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:56:10.62518268 +0000 UTC Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.023482 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.103032 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.103088 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.103226 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.104157 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.104222 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.104242 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.104926 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 
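
The kubelet-serving lines above log the serving certificate's expiry together with a rotation deadline that is re-jittered on each check, which is why the deadline value changes from line to line while the expiry stays fixed. To read the same expiry off disk, a small sketch with Go's crypto/x509 is enough; the certificate path is an assumed default and not something the log specifies:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Assumed location of the kubelet serving certificate; adjust as needed.
	raw, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-server-current.pem")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	fmt.Println("expires:  ", cert.NotAfter)
	fmt.Println("remaining:", time.Until(cert.NotAfter).Round(time.Minute))
}
```
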
07:53:47.105024 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.105163 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.198715 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.198887 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.199772 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.199796 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.199804 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:47 crc kubenswrapper[4806]: I0126 07:53:47.627363 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 07:53:48 crc kubenswrapper[4806]: I0126 07:53:48.023742 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:39:14.230550269 +0000 UTC Jan 26 07:53:48 crc kubenswrapper[4806]: I0126 07:53:48.105685 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:48 crc kubenswrapper[4806]: I0126 07:53:48.107010 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:48 crc kubenswrapper[4806]: I0126 07:53:48.107077 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:48 crc kubenswrapper[4806]: I0126 07:53:48.107102 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:48 crc kubenswrapper[4806]: I0126 07:53:48.439097 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:48 crc kubenswrapper[4806]: I0126 07:53:48.439387 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:48 crc kubenswrapper[4806]: I0126 07:53:48.441046 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:48 crc kubenswrapper[4806]: I0126 07:53:48.441101 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:48 crc kubenswrapper[4806]: I0126 07:53:48.441120 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:49 crc kubenswrapper[4806]: I0126 07:53:49.024703 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:11:00.183285889 +0000 UTC Jan 26 07:53:50 crc kubenswrapper[4806]: I0126 07:53:50.024217 4806 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 07:53:50 crc kubenswrapper[4806]: I0126 07:53:50.024343 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 07:53:50 crc kubenswrapper[4806]: I0126 07:53:50.025305 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 18:00:55.061281838 +0000 UTC Jan 26 07:53:51 crc kubenswrapper[4806]: I0126 07:53:51.026222 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:00:10.373592772 +0000 UTC Jan 26 07:53:51 crc kubenswrapper[4806]: E0126 07:53:51.107624 4806 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 07:53:51 crc kubenswrapper[4806]: I0126 07:53:51.325885 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:51 crc kubenswrapper[4806]: I0126 07:53:51.326239 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:51 crc kubenswrapper[4806]: I0126 07:53:51.327857 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:51 crc kubenswrapper[4806]: I0126 07:53:51.327901 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:51 crc kubenswrapper[4806]: I0126 07:53:51.327920 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:51 crc kubenswrapper[4806]: I0126 07:53:51.331209 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.027214 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:04:05.890020263 +0000 UTC Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.116626 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.118048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.118481 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.118605 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:52 crc kubenswrapper[4806]: E0126 07:53:52.611957 4806 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS 
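
The startup-probe failure above is an HTTPS GET against https://192.168.126.11:10357/healthz that exceeded the client timeout. Something equivalent can be reproduced with a plain Go HTTP client, sketched below; the URL comes from the probe output, the 5-second timeout is an arbitrary illustrative choice, and certificate verification is skipped only because the probe target uses a serving certificate this ad-hoc client does not trust:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // short timeout, roughly like a probe
		Transport: &http.Transport{
			// The health endpoint serves a cert this client has no CA for.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.126.11:10357/healthz")
	if err != nil {
		fmt.Println("probe error:", err) // e.g. timeout or connection refused
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode)
}
```
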
handshake timeout" node="crc" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.695651 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.696041 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.697325 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.697422 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.697440 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.799743 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.800016 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.801751 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.801810 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.801829 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:52 crc kubenswrapper[4806]: I0126 07:53:52.964313 4806 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 07:53:52 crc kubenswrapper[4806]: E0126 07:53:52.986376 4806 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 07:53:53 crc kubenswrapper[4806]: I0126 07:53:53.029009 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 13:51:09.36959209 +0000 UTC Jan 26 07:53:53 crc kubenswrapper[4806]: I0126 07:53:53.845927 4806 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 07:53:53 crc kubenswrapper[4806]: I0126 07:53:53.845993 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 07:53:53 crc kubenswrapper[4806]: E0126 07:53:53.975907 4806 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 26 07:53:54 crc kubenswrapper[4806]: I0126 07:53:54.029432 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 05:38:15.662069502 +0000 UTC Jan 26 07:53:54 crc kubenswrapper[4806]: I0126 07:53:54.061385 4806 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 26 07:53:54 crc kubenswrapper[4806]: I0126 07:53:54.061450 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 26 07:53:54 crc kubenswrapper[4806]: I0126 07:53:54.212506 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:54 crc kubenswrapper[4806]: I0126 07:53:54.213667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:54 crc kubenswrapper[4806]: I0126 07:53:54.213703 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:54 crc kubenswrapper[4806]: I0126 07:53:54.213714 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:54 crc kubenswrapper[4806]: I0126 07:53:54.213737 4806 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 07:53:55 crc kubenswrapper[4806]: I0126 07:53:55.030260 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 03:19:21.284099073 +0000 UTC Jan 26 07:53:56 crc kubenswrapper[4806]: I0126 07:53:56.031344 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 00:08:49.626218827 +0000 UTC Jan 26 07:53:57 crc kubenswrapper[4806]: I0126 07:53:57.033195 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:36:27.905710657 +0000 UTC Jan 26 07:53:57 crc kubenswrapper[4806]: I0126 07:53:57.366377 4806 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 07:53:57 crc kubenswrapper[4806]: I0126 07:53:57.388070 4806 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 07:53:58 crc kubenswrapper[4806]: I0126 07:53:58.034144 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:24:59.408937512 +0000 UTC Jan 26 07:53:58 crc kubenswrapper[4806]: I0126 07:53:58.854969 4806 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:58 crc kubenswrapper[4806]: I0126 07:53:58.855293 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 07:53:58 crc kubenswrapper[4806]: I0126 07:53:58.857365 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:58 crc kubenswrapper[4806]: I0126 07:53:58.857452 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:58 crc kubenswrapper[4806]: I0126 07:53:58.857482 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:58 crc kubenswrapper[4806]: I0126 07:53:58.862643 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.034586 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:28:08.516425952 +0000 UTC Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.038301 4806 trace.go:236] Trace[140332330]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 07:53:45.215) (total time: 13822ms): Jan 26 07:53:59 crc kubenswrapper[4806]: Trace[140332330]: ---"Objects listed" error: 13822ms (07:53:59.038) Jan 26 07:53:59 crc kubenswrapper[4806]: Trace[140332330]: [13.822776322s] [13.822776322s] END Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.038358 4806 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.038598 4806 trace.go:236] Trace[336185991]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 07:53:44.644) (total time: 14393ms): Jan 26 07:53:59 crc kubenswrapper[4806]: Trace[336185991]: ---"Objects listed" error: 14393ms (07:53:59.038) Jan 26 07:53:59 crc kubenswrapper[4806]: Trace[336185991]: [14.393512843s] [14.393512843s] END Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.038642 4806 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.039289 4806 trace.go:236] Trace[496763444]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 07:53:44.620) (total time: 14418ms): Jan 26 07:53:59 crc kubenswrapper[4806]: Trace[496763444]: ---"Objects listed" error: 14418ms (07:53:59.039) Jan 26 07:53:59 crc kubenswrapper[4806]: Trace[496763444]: [14.418272307s] [14.418272307s] END Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.039332 4806 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.039823 4806 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.041708 4806 trace.go:236] Trace[315065200]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 07:53:44.962) (total time: 14079ms): Jan 26 07:53:59 crc kubenswrapper[4806]: Trace[315065200]: ---"Objects listed" error: 14078ms (07:53:59.041) Jan 26 07:53:59 crc kubenswrapper[4806]: Trace[315065200]: [14.079177409s] [14.079177409s] END Jan 26 
07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.041751 4806 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.096469 4806 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35206->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.096502 4806 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35210->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.096596 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35206->192.168.126.11:17697: read: connection reset by peer" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.096610 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35210->192.168.126.11:17697: read: connection reset by peer" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.096970 4806 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.097010 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.138544 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.141293 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3" exitCode=255 Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.141349 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3"} Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.161680 4806 scope.go:117] "RemoveContainer" containerID="6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3" Jan 26 07:53:59 crc kubenswrapper[4806]: 
I0126 07:53:59.224397 4806 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.224797 4806 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.226155 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.226189 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.226202 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.226219 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.226231 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:53:59Z","lastTransitionTime":"2026-01-26T07:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:53:59 crc kubenswrapper[4806]: E0126 07:53:59.238678 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.242746 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.242771 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.242779 4806 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.242796 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.242808 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:53:59Z","lastTransitionTime":"2026-01-26T07:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:53:59 crc kubenswrapper[4806]: E0126 07:53:59.254471 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.269504 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.269551 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.269561 4806 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.269582 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.269597 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:53:59Z","lastTransitionTime":"2026-01-26T07:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:53:59 crc kubenswrapper[4806]: E0126 07:53:59.288784 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.299539 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.299580 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.299590 4806 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.299607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.299635 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:53:59Z","lastTransitionTime":"2026-01-26T07:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:53:59 crc kubenswrapper[4806]: E0126 07:53:59.317825 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.328640 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.328676 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.328687 4806 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.328703 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.328714 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:53:59Z","lastTransitionTime":"2026-01-26T07:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:53:59 crc kubenswrapper[4806]: E0126 07:53:59.353633 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:53:59 crc kubenswrapper[4806]: E0126 07:53:59.353773 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.356334 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.356365 4806 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.356375 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.356411 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.356425 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:53:59Z","lastTransitionTime":"2026-01-26T07:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.425246 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.429181 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.458624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.458669 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.458682 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.458700 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.458719 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:53:59Z","lastTransitionTime":"2026-01-26T07:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.561342 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.561392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.561404 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.561424 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.561439 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:53:59Z","lastTransitionTime":"2026-01-26T07:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.610866 4806 csr.go:261] certificate signing request csr-6sv5c is approved, waiting to be issued Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.646493 4806 csr.go:257] certificate signing request csr-6sv5c is issued Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.664995 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.665037 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.665048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.665067 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.665078 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:53:59Z","lastTransitionTime":"2026-01-26T07:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.767214 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.767268 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.767278 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.767332 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.767343 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:53:59Z","lastTransitionTime":"2026-01-26T07:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.869562 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.869612 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.869623 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.869647 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.869660 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:53:59Z","lastTransitionTime":"2026-01-26T07:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.972432 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.972474 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.972483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.972502 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:53:59 crc kubenswrapper[4806]: I0126 07:53:59.972514 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:53:59Z","lastTransitionTime":"2026-01-26T07:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.032934 4806 apiserver.go:52] "Watching apiserver" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.035110 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 21:21:58.141242529 +0000 UTC Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.037659 4806 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.038086 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wx526","openshift-image-registry/node-ca-pw8cg","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.038441 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.038926 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.038998 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.039081 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.039155 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.039171 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.039215 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.039319 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.039396 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.039437 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wx526" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.040082 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-pw8cg" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.041077 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.041464 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.042291 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.043546 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.043579 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.043642 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.046295 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.048596 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.048859 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.048971 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.049010 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.049074 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.048861 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.049164 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 07:54:00 crc 
kubenswrapper[4806]: I0126 07:54:00.049240 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.049264 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.064941 4806 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.068607 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.074003 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.074048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.074067 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.074086 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.074130 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:00Z","lastTransitionTime":"2026-01-26T07:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.083235 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.095354 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.107875 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.120595 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.135422 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.145678 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.146889 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.146962 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.146988 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147042 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147068 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147114 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147157 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147179 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147200 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147267 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147291 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147340 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147365 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147386 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147410 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147433 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147457 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147483 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147505 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147544 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147580 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147655 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147678 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147701 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147723 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147750 4806 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147775 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147798 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147848 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147874 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147917 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147941 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147964 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147991 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147999 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3"} Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148020 4806 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148042 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148066 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148159 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148183 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148201 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148208 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148220 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148244 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148262 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148280 4806 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148299 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148320 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148341 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148362 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148382 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148434 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148460 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148487 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148510 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148556 
4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148586 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148610 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148633 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148658 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147293 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147715 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.147763 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148128 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148377 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148665 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148868 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148887 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148899 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148988 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.149049 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.149336 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.149472 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.149500 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.149570 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.149780 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.149794 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.149822 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.150457 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.150696 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.151307 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.151961 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.152476 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.157819 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.158127 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.159486 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.159609 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.159740 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.159862 4806 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.160564 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.160703 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161415 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161567 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.148680 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161710 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161754 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161782 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161806 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161804 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161828 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161859 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161881 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161903 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.161926 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:54:00.661905034 +0000 UTC m=+19.926313150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161941 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161960 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.161993 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162015 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162044 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162069 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162089 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162090 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162139 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162160 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162180 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162199 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162218 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162237 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162253 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162273 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162290 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162307 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162325 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162374 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162393 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162409 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162438 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162454 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162469 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162486 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162503 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.162900 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.163109 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.163364 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.163412 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.163603 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.163672 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.163813 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.163884 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.164083 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.164208 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.164276 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.164361 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.164401 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.164542 4806 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.164876 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165016 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165050 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165073 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165091 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165108 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165125 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165141 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165150 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165157 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165195 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165216 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165236 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165255 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165275 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165291 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165308 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165327 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165343 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165362 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165365 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165380 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165401 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165420 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165440 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165458 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165476 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165493 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165565 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165585 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165605 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165623 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165640 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165658 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165675 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165690 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165707 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165724 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165740 4806 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165755 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165777 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165802 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165822 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165844 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165879 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165896 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165912 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165928 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 07:54:00 crc 
kubenswrapper[4806]: I0126 07:54:00.165944 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165962 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165980 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165997 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166015 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166029 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166044 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166062 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166077 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166094 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 07:54:00 
crc kubenswrapper[4806]: I0126 07:54:00.166111 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166128 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166144 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166163 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166180 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166199 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166229 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166257 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166277 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166301 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166325 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166344 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166360 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166377 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166397 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166419 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166447 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166474 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166497 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.166792 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.167973 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168028 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168047 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168069 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168092 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168112 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168129 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168154 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168174 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168192 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168209 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168227 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168254 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168271 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168288 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168305 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168745 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168773 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168798 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168815 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168844 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168861 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168881 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168901 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168918 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168936 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168956 4806 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168974 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.168993 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169042 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/265e15b3-6ef8-47df-ab15-dcc9bd9574ae-hosts-file\") pod \"node-resolver-wx526\" (UID: \"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\") " pod="openshift-dns/node-resolver-wx526" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169065 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169094 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9c73a9f4-20b2-4c8a-b25d-413770be4fac-host\") pod \"node-ca-pw8cg\" (UID: \"9c73a9f4-20b2-4c8a-b25d-413770be4fac\") " pod="openshift-image-registry/node-ca-pw8cg" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169121 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169149 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169173 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169195 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169214 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169232 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmfr4\" (UniqueName: \"kubernetes.io/projected/265e15b3-6ef8-47df-ab15-dcc9bd9574ae-kube-api-access-cmfr4\") pod \"node-resolver-wx526\" (UID: \"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\") " pod="openshift-dns/node-resolver-wx526" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169264 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169290 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169313 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt9w8\" (UniqueName: \"kubernetes.io/projected/9c73a9f4-20b2-4c8a-b25d-413770be4fac-kube-api-access-qt9w8\") pod \"node-ca-pw8cg\" (UID: \"9c73a9f4-20b2-4c8a-b25d-413770be4fac\") " pod="openshift-image-registry/node-ca-pw8cg" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169345 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169369 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169393 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169471 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169493 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169511 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9c73a9f4-20b2-4c8a-b25d-413770be4fac-serviceca\") pod \"node-ca-pw8cg\" (UID: \"9c73a9f4-20b2-4c8a-b25d-413770be4fac\") " pod="openshift-image-registry/node-ca-pw8cg" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169542 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169606 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169626 4806 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169636 4806 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169647 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169658 4806 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169669 4806 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169680 
4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169691 4806 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169701 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169710 4806 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169720 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169729 4806 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169739 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169749 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169762 4806 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169782 4806 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169799 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169812 4806 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169824 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169835 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169845 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169856 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169869 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169881 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169896 4806 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169911 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169924 4806 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169935 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169946 4806 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169957 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169968 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169980 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.169993 4806 
reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.170006 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.170019 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.170030 4806 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.170041 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.165378 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182333 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182368 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182464 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182477 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182494 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182554 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182506 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:00Z","lastTransitionTime":"2026-01-26T07:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182663 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182903 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182954 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.183415 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.183684 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.183768 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.184104 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.184370 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.184826 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.184860 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.172739 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.172941 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.173085 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.173215 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.173365 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.173511 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.173678 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.173702 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.173847 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.173869 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.173983 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.174071 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.174387 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.175256 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.176464 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.176515 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.177354 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.177818 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.180167 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.180437 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.180504 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.180714 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.180882 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). 
InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.180950 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.181056 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.181217 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.187416 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.181250 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.181396 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.181620 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.181656 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). 
InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.181610 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.181893 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182043 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182220 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.182232 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.185269 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.185416 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.185563 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.185723 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.185933 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.186234 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.186920 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.186949 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.187299 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.181229 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.187718 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.187865 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.188043 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.188285 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.188587 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.194142 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.191931 4806 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.188862 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.189039 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.189102 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.189108 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.195268 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:00.695240522 +0000 UTC m=+19.959648578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.195275 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.189370 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.189565 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.198709 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:00.69868869 +0000 UTC m=+19.963096936 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.189032 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.189726 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.189901 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.189942 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.190048 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.190278 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.190494 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.190680 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.190906 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.191131 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.191411 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.191431 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.192124 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.193002 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.193732 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.194124 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.194790 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.195089 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.195467 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.195487 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.189343 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.191872 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.195660 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.195744 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.196454 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.199262 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.199488 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.199697 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.199807 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.200741 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.201114 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.201181 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.201959 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.202988 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.203819 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.185733 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.204640 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.204679 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.204703 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.204906 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.211239 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.211259 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.211327 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:00.711307198 +0000 UTC m=+19.975715434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.211504 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.204965 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.205015 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.205305 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.206575 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.206801 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.207076 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.207513 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.207537 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.208432 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.208443 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.208645 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.208879 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.172654 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.211726 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.210851 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.210984 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.211784 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.211798 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.211850 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:00.711832633 +0000 UTC m=+19.976240869 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.211728 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.211907 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.212283 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.212287 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.212478 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.212546 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.212824 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.213083 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.213327 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.213499 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.213727 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.214168 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.214674 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.215694 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.217297 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.217595 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.219805 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.220191 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.220306 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.223253 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.223369 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.225707 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.226316 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.229729 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.230607 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.241357 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.243341 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.243978 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.253935 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.258208 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.272925 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9c73a9f4-20b2-4c8a-b25d-413770be4fac-serviceca\") pod \"node-ca-pw8cg\" (UID: \"9c73a9f4-20b2-4c8a-b25d-413770be4fac\") " pod="openshift-image-registry/node-ca-pw8cg" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.272972 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/265e15b3-6ef8-47df-ab15-dcc9bd9574ae-hosts-file\") pod \"node-resolver-wx526\" (UID: \"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\") " pod="openshift-dns/node-resolver-wx526" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273000 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273023 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9c73a9f4-20b2-4c8a-b25d-413770be4fac-host\") pod \"node-ca-pw8cg\" (UID: \"9c73a9f4-20b2-4c8a-b25d-413770be4fac\") " pod="openshift-image-registry/node-ca-pw8cg" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273043 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273082 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt9w8\" (UniqueName: \"kubernetes.io/projected/9c73a9f4-20b2-4c8a-b25d-413770be4fac-kube-api-access-qt9w8\") pod \"node-ca-pw8cg\" (UID: \"9c73a9f4-20b2-4c8a-b25d-413770be4fac\") " pod="openshift-image-registry/node-ca-pw8cg" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273101 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmfr4\" (UniqueName: \"kubernetes.io/projected/265e15b3-6ef8-47df-ab15-dcc9bd9574ae-kube-api-access-cmfr4\") pod \"node-resolver-wx526\" (UID: \"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\") " pod="openshift-dns/node-resolver-wx526" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273156 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273173 4806 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273184 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273196 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273208 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273219 4806 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273230 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273241 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273251 4806 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273261 4806 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273272 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273283 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273294 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273306 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273316 4806 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273337 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273348 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273359 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273370 4806 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273381 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273393 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273404 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273415 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273426 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273436 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273446 4806 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273457 4806 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273467 4806 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273477 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273487 4806 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273497 4806 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273509 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273557 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273570 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273580 4806 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273590 4806 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273677 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273691 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273701 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273711 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273722 4806 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273733 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273744 4806 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273754 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273765 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273775 4806 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273786 4806 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273799 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273811 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273823 4806 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273834 4806 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273844 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273854 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273863 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273873 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273871 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9c73a9f4-20b2-4c8a-b25d-413770be4fac-serviceca\") pod \"node-ca-pw8cg\" (UID: \"9c73a9f4-20b2-4c8a-b25d-413770be4fac\") " pod="openshift-image-registry/node-ca-pw8cg" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273883 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273951 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273967 4806 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273980 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.273993 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274008 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274021 4806 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274035 4806 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274051 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274064 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274076 4806 reconciler_common.go:293] "Volume detached for volume 
\"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274095 4806 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274106 4806 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274116 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274126 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274136 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274146 4806 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274155 4806 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274165 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274175 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274184 4806 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274193 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274203 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: 
I0126 07:54:00.274214 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274223 4806 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274232 4806 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274242 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274251 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274260 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274270 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274280 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274289 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274299 4806 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274309 4806 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274320 4806 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274329 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 
07:54:00.274340 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274342 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/265e15b3-6ef8-47df-ab15-dcc9bd9574ae-hosts-file\") pod \"node-resolver-wx526\" (UID: \"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\") " pod="openshift-dns/node-resolver-wx526" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274351 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274378 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9c73a9f4-20b2-4c8a-b25d-413770be4fac-host\") pod \"node-ca-pw8cg\" (UID: \"9c73a9f4-20b2-4c8a-b25d-413770be4fac\") " pod="openshift-image-registry/node-ca-pw8cg" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274386 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274401 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274412 4806 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274424 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274438 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274448 4806 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274458 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274468 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274478 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274489 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274501 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274512 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274557 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274569 4806 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274580 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274591 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274602 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274611 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274622 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274632 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274642 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274654 4806 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" 
(UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274665 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274677 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274690 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274702 4806 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274713 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274724 4806 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274411 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274817 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274832 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274843 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274852 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274862 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274872 4806 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274883 4806 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274894 4806 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274905 4806 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274914 4806 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274923 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274934 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274943 4806 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274953 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274963 4806 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274973 4806 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274983 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274994 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275006 4806 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275018 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275029 4806 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275039 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275049 4806 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275060 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275070 4806 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275079 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275089 4806 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275100 4806 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275111 4806 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275122 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275134 4806 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275145 4806 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275180 4806 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275191 4806 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275201 4806 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.275211 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.274543 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.286241 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.286264 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.286273 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.286286 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.286295 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:00Z","lastTransitionTime":"2026-01-26T07:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.290117 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.294599 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt9w8\" (UniqueName: \"kubernetes.io/projected/9c73a9f4-20b2-4c8a-b25d-413770be4fac-kube-api-access-qt9w8\") pod \"node-ca-pw8cg\" (UID: \"9c73a9f4-20b2-4c8a-b25d-413770be4fac\") " pod="openshift-image-registry/node-ca-pw8cg" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.308702 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.320472 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmfr4\" (UniqueName: \"kubernetes.io/projected/265e15b3-6ef8-47df-ab15-dcc9bd9574ae-kube-api-access-cmfr4\") pod \"node-resolver-wx526\" (UID: \"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\") " pod="openshift-dns/node-resolver-wx526" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.333219 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-k2tlk"] Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.342089 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.349401 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.354956 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.355341 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.355564 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.355772 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.357508 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.367582 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.367814 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.370704 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wx526" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.381958 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.394380 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.394825 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pw8cg" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.398133 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.398155 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.398163 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.398175 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.398183 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:00Z","lastTransitionTime":"2026-01-26T07:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.432986 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.468187 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.486599 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d07502a2-50b0-4012-b335-340a1c694c50-rootfs\") pod \"machine-config-daemon-k2tlk\" (UID: \"d07502a2-50b0-4012-b335-340a1c694c50\") " pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.486636 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d07502a2-50b0-4012-b335-340a1c694c50-proxy-tls\") pod \"machine-config-daemon-k2tlk\" (UID: \"d07502a2-50b0-4012-b335-340a1c694c50\") " pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.486662 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67x45\" (UniqueName: \"kubernetes.io/projected/d07502a2-50b0-4012-b335-340a1c694c50-kube-api-access-67x45\") pod \"machine-config-daemon-k2tlk\" (UID: \"d07502a2-50b0-4012-b335-340a1c694c50\") " pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.486695 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d07502a2-50b0-4012-b335-340a1c694c50-mcd-auth-proxy-config\") pod \"machine-config-daemon-k2tlk\" (UID: \"d07502a2-50b0-4012-b335-340a1c694c50\") " pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.501713 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.501747 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.501755 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.501770 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.501779 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:00Z","lastTransitionTime":"2026-01-26T07:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.515829 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.536132 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.554650 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.583140 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.589837 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d07502a2-50b0-4012-b335-340a1c694c50-mcd-auth-proxy-config\") pod \"machine-config-daemon-k2tlk\" (UID: \"d07502a2-50b0-4012-b335-340a1c694c50\") " pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.589900 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d07502a2-50b0-4012-b335-340a1c694c50-rootfs\") pod \"machine-config-daemon-k2tlk\" (UID: \"d07502a2-50b0-4012-b335-340a1c694c50\") " pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.589920 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d07502a2-50b0-4012-b335-340a1c694c50-proxy-tls\") pod \"machine-config-daemon-k2tlk\" (UID: \"d07502a2-50b0-4012-b335-340a1c694c50\") " pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.589935 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67x45\" (UniqueName: \"kubernetes.io/projected/d07502a2-50b0-4012-b335-340a1c694c50-kube-api-access-67x45\") pod \"machine-config-daemon-k2tlk\" (UID: \"d07502a2-50b0-4012-b335-340a1c694c50\") " pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.590831 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d07502a2-50b0-4012-b335-340a1c694c50-mcd-auth-proxy-config\") pod \"machine-config-daemon-k2tlk\" (UID: \"d07502a2-50b0-4012-b335-340a1c694c50\") " pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.590893 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d07502a2-50b0-4012-b335-340a1c694c50-rootfs\") pod \"machine-config-daemon-k2tlk\" (UID: \"d07502a2-50b0-4012-b335-340a1c694c50\") " pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.596866 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d07502a2-50b0-4012-b335-340a1c694c50-proxy-tls\") pod 
\"machine-config-daemon-k2tlk\" (UID: \"d07502a2-50b0-4012-b335-340a1c694c50\") " pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.605767 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.605813 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.605822 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.605840 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.605849 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:00Z","lastTransitionTime":"2026-01-26T07:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.608802 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.625295 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67x45\" (UniqueName: \"kubernetes.io/projected/d07502a2-50b0-4012-b335-340a1c694c50-kube-api-access-67x45\") pod \"machine-config-daemon-k2tlk\" (UID: \"d07502a2-50b0-4012-b335-340a1c694c50\") " pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.640148 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.647846 4806 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 07:48:59 +0000 UTC, rotation deadline is 2026-10-09 09:59:24.729567186 +0000 UTC Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.647923 4806 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6146h5m24.081646434s for next certificate rotation Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.663041 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.690861 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.691016 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:54:01.691002694 +0000 UTC m=+20.955410750 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.694212 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.694362 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.704675 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.711786 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.711810 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.711819 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.712086 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.712102 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:00Z","lastTransitionTime":"2026-01-26T07:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.722144 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.746227 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.757010 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-d7glh"] Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.757343 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.761886 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-268q5"] Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.762696 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.765365 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.767070 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.767120 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.767175 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.767272 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.767420 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.767513 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.771081 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.772667 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8mw7z"] Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.773607 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.783508 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.784059 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.784306 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.784332 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.788916 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.788944 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.788999 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.791450 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.791485 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.791512 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.791552 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.791655 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.791712 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.791723 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:01.791699449 +0000 UTC m=+21.056107505 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.791762 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:01.79174409 +0000 UTC m=+21.056152146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.791661 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.791785 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.791796 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.791823 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:01.791816662 +0000 UTC m=+21.056224718 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.791860 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.791878 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.791890 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:00 crc kubenswrapper[4806]: E0126 07:54:00.791939 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:01.791922565 +0000 UTC m=+21.056330621 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.798190 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.808755 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.814112 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.814144 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.814153 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.814168 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.814176 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:00Z","lastTransitionTime":"2026-01-26T07:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.816757 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.833001 4806 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0af
acf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.843872 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.863477 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.882006 4806 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.882331 4806 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.882451 4806 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"image-registry-certificates": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.882580 4806 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: very short watch: object-"openshift-image-registry"/"node-ca-dockercfg-4777p": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.882671 4806 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.882758 4806 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.882844 4806 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.882945 4806 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.883026 4806 
reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-rbac-proxy": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.883098 4806 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.883177 4806 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.883288 4806 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.883360 4806 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"proxy-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.883436 4806 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.883544 4806 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.883922 4806 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-image-registry"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.884013 4806 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.884282 4806 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-machine-config-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.884387 4806 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch 
close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.884489 4806 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.884628 4806 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.884717 4806 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.884797 4806 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.884870 4806 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.884952 4806 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.885044 4806 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.885131 4806 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: very short watch: object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.885208 4806 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.885290 4806 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.885383 4806 
reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.885477 4806 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.885943 4806 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.886051 4806 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.886154 4806 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.886374 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-8mw7z/status\": read tcp 38.102.83.66:33278->38.102.83.66:6443: use of closed network connection" Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.886809 4806 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: W0126 07:54:00.886825 4806 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: 
very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.895987 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-run-ovn-kubernetes\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896022 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-run-netns\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896037 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-hostroot\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896051 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-multus-daemon-config\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896068 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpcqh\" (UniqueName: \"kubernetes.io/projected/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-kube-api-access-xpcqh\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896084 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovnkube-script-lib\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896099 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/59844d88-1bf9-4761-b664-74623e7532c3-cnibin\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896113 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/59844d88-1bf9-4761-b664-74623e7532c3-os-release\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896129 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-systemd\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896142 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-openvswitch\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896156 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-cnibin\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896171 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-var-lib-cni-multus\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896189 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-os-release\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896203 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-var-lib-kubelet\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896218 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-slash\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896232 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-cni-bin\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896261 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-node-log\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896283 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/59844d88-1bf9-4761-b664-74623e7532c3-cni-binary-copy\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896305 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-cni-netd\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896321 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-cni-binary-copy\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896337 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-etc-openvswitch\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896352 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-systemd-units\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896366 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-etc-kubernetes\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896383 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/59844d88-1bf9-4761-b664-74623e7532c3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896399 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt988\" (UniqueName: \"kubernetes.io/projected/59844d88-1bf9-4761-b664-74623e7532c3-kube-api-access-gt988\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896414 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-multus-conf-dir\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896438 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-log-socket\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896454 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-system-cni-dir\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896470 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-multus-socket-dir-parent\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896491 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-run-multus-certs\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.896508 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovnkube-config\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.897827 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovn-node-metrics-cert\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.897848 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-run-k8s-cni-cncf-io\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.897866 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh82q\" (UniqueName: \"kubernetes.io/projected/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-kube-api-access-bh82q\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.898182 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-var-lib-openvswitch\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.898204 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-ovn\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.898230 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-kubelet\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.898243 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-run-netns\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.898258 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-env-overrides\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.899916 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/59844d88-1bf9-4761-b664-74623e7532c3-system-cni-dir\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.899941 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/59844d88-1bf9-4761-b664-74623e7532c3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.899966 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-var-lib-cni-bin\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.899984 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.900002 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-multus-cni-dir\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.918624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.918655 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.918664 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.918680 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.918917 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:00Z","lastTransitionTime":"2026-01-26T07:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.938972 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.974510 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:00 crc kubenswrapper[4806]: I0126 07:54:00.990307 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001157 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-systemd-units\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001207 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-etc-kubernetes\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001225 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/59844d88-1bf9-4761-b664-74623e7532c3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001240 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt988\" (UniqueName: \"kubernetes.io/projected/59844d88-1bf9-4761-b664-74623e7532c3-kube-api-access-gt988\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001256 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-multus-conf-dir\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001279 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-log-socket\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001294 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-system-cni-dir\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001309 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-multus-socket-dir-parent\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001331 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-run-multus-certs\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001345 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovnkube-config\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001361 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovn-node-metrics-cert\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001375 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-run-k8s-cni-cncf-io\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001396 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh82q\" (UniqueName: \"kubernetes.io/projected/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-kube-api-access-bh82q\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001411 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-var-lib-openvswitch\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001426 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-ovn\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001446 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-kubelet\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001460 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-run-netns\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001473 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-env-overrides\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001491 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/59844d88-1bf9-4761-b664-74623e7532c3-system-cni-dir\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001512 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/59844d88-1bf9-4761-b664-74623e7532c3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001546 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-var-lib-cni-bin\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001560 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001574 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-multus-cni-dir\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001590 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-run-ovn-kubernetes\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001608 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-run-netns\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001623 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-hostroot\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001641 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-multus-daemon-config\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001657 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpcqh\" (UniqueName: \"kubernetes.io/projected/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-kube-api-access-xpcqh\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001671 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovnkube-script-lib\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001683 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/59844d88-1bf9-4761-b664-74623e7532c3-cnibin\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001724 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/59844d88-1bf9-4761-b664-74623e7532c3-os-release\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001739 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-systemd\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001752 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-openvswitch\") pod \"ovnkube-node-8mw7z\" (UID: 
\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001766 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-cnibin\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001781 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-var-lib-cni-multus\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001796 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-os-release\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001810 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-var-lib-kubelet\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001823 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-slash\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001837 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-cni-bin\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001859 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-node-log\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001876 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/59844d88-1bf9-4761-b664-74623e7532c3-cni-binary-copy\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001889 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-cni-netd\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001904 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-cni-binary-copy\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001918 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-etc-openvswitch\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.001972 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-etc-openvswitch\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.002002 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-systemd-units\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.002023 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-etc-kubernetes\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.002569 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/59844d88-1bf9-4761-b664-74623e7532c3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.002781 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-multus-conf-dir\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.002836 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-log-socket\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.002870 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-system-cni-dir\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.003009 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-multus-socket-dir-parent\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.003036 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-run-multus-certs\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.003495 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovnkube-config\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.005860 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.006069 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/59844d88-1bf9-4761-b664-74623e7532c3-system-cni-dir\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.006118 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-run-k8s-cni-cncf-io\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.006310 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-var-lib-openvswitch\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.006350 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-ovn\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.006377 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-kubelet\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.006400 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-run-netns\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.006418 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-multus-daemon-config\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.006827 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovn-node-metrics-cert\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.006844 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-env-overrides\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.006903 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-var-lib-cni-multus\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007231 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-cni-bin\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007261 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/59844d88-1bf9-4761-b664-74623e7532c3-cnibin\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007390 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/59844d88-1bf9-4761-b664-74623e7532c3-os-release\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007414 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-systemd\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007427 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-openvswitch\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007451 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-cnibin\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " 
pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007464 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-run-netns\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007477 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-run-ovn-kubernetes\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007491 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-var-lib-cni-bin\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007489 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovnkube-script-lib\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007549 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-os-release\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007573 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-host-var-lib-kubelet\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007591 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-slash\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007682 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-multus-cni-dir\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007680 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/59844d88-1bf9-4761-b664-74623e7532c3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007710 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-node-log\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.007725 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-cni-netd\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.008138 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/59844d88-1bf9-4761-b664-74623e7532c3-cni-binary-copy\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.008156 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-cni-binary-copy\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.008171 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-hostroot\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.008190 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.020841 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.021080 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.021097 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.021106 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.021120 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.021131 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:01Z","lastTransitionTime":"2026-01-26T07:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.027297 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt988\" (UniqueName: \"kubernetes.io/projected/59844d88-1bf9-4761-b664-74623e7532c3-kube-api-access-gt988\") pod \"multus-additional-cni-plugins-268q5\" (UID: \"59844d88-1bf9-4761-b664-74623e7532c3\") " pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.027765 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpcqh\" (UniqueName: \"kubernetes.io/projected/4320ae6b-0d73-47d7-9f2c-f3c5b6b69041-kube-api-access-xpcqh\") pod \"multus-d7glh\" (UID: \"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\") " pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.029420 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh82q\" (UniqueName: \"kubernetes.io/projected/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-kube-api-access-bh82q\") pod \"ovnkube-node-8mw7z\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.034578 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.035838 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 11:10:52.58492321 +0000 UTC Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.047664 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.048410 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.050274 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.051140 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.051206 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.052339 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.053729 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.054345 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.059966 4806 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.061052 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.063531 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.065374 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.065970 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.067271 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.067840 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.068837 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.069950 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.071085 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.071796 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.072179 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.073191 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.074069 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.074640 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.074712 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.075787 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.076265 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.077986 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.078376 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.079488 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.080218 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.081059 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.081659 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.082933 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.083100 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.083680 4806 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.083777 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.085467 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.086101 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-d7glh" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.086359 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.087011 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.089043 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.090408 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.091404 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.092567 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.093236 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.094140 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.095191 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.096073 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.096751 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.097037 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.097655 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: W0126 07:54:01.097826 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4320ae6b_0d73_47d7_9f2c_f3c5b6b69041.slice/crio-1dfd9cd087fb37a31ee348f77c4336682f745c7457372ea92873c8e868ac33f5 WatchSource:0}: Error finding container 1dfd9cd087fb37a31ee348f77c4336682f745c7457372ea92873c8e868ac33f5: Status 404 returned error can't find the container with id 
1dfd9cd087fb37a31ee348f77c4336682f745c7457372ea92873c8e868ac33f5 Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.098171 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.099081 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.099799 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.100738 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.101245 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.102262 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.102983 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.103695 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.104686 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.113705 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.123732 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.124045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.124056 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.124071 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.124081 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:01Z","lastTransitionTime":"2026-01-26T07:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.126659 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.131169 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-268q5" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.139230 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.154961 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wx526" event={"ID":"265e15b3-6ef8-47df-ab15-dcc9bd9574ae","Type":"ContainerStarted","Data":"e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.155017 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wx526" event={"ID":"265e15b3-6ef8-47df-ab15-dcc9bd9574ae","Type":"ContainerStarted","Data":"e081fe66173f6f36132f9f5b2da4a7b549bb23c9b60fb4150f07dede139f2357"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.157053 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.159157 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.167879 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.167927 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"74443c16a4372fa48096cc9ee3e0cb31f05903b2df297811ce89860c28209166"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.172058 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" event={"ID":"59844d88-1bf9-4761-b664-74623e7532c3","Type":"ContainerStarted","Data":"663b3f8e7a17c416e3144b5974d5b57b097f7c7c4f04b7421e6772a8c460d48e"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.174714 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d
6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.184745 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"89f653e4de50d9654203fbb3d623247d7652227ed39c6f7f8a9ac1297f740f55"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.190015 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.190057 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.190066 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"5329d1bc6931cda3bcc0ecaaff2666c3c97861e06610961f0d173508aebc638b"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.193832 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7glh" event={"ID":"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041","Type":"ContainerStarted","Data":"1dfd9cd087fb37a31ee348f77c4336682f745c7457372ea92873c8e868ac33f5"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.194797 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.194820 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.194829 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"571733da43ee65c95ade43afb410d37a0fab8837b2a4d1267be3052a5fa08e8b"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.194868 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.204675 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pw8cg" event={"ID":"9c73a9f4-20b2-4c8a-b25d-413770be4fac","Type":"ContainerStarted","Data":"792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.204867 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pw8cg" event={"ID":"9c73a9f4-20b2-4c8a-b25d-413770be4fac","Type":"ContainerStarted","Data":"91358d5b9d6dc99ee60753c2ed5424789eec802dd42f711b3e2ddd1a8ca5fa06"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.205112 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.219693 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.234102 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.234144 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.234159 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.234179 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.234190 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:01Z","lastTransitionTime":"2026-01-26T07:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.270541 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.305948 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.336184 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.336212 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.336219 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.336234 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.336242 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:01Z","lastTransitionTime":"2026-01-26T07:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.347234 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.394583 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.424183 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.438280 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.438307 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.438316 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.438329 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.438339 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:01Z","lastTransitionTime":"2026-01-26T07:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.463468 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.517613 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.540167 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.540205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.540231 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.540251 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.540261 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:01Z","lastTransitionTime":"2026-01-26T07:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.571995 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.612330 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.643149 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.643488 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.643500 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.643515 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.643784 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:01Z","lastTransitionTime":"2026-01-26T07:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.657536 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.674136 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.710667 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.710863 4806 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:54:03.710836679 +0000 UTC m=+22.975244735 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.711379 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.743081 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.745741 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.745780 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.745788 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.745804 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.745815 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:01Z","lastTransitionTime":"2026-01-26T07:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.755276 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.774315 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.811748 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.811791 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.811813 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.811834 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.811902 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.811947 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:03.811934635 +0000 UTC m=+23.076342691 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.812267 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.812307 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:03.812299326 +0000 UTC m=+23.076707382 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.812308 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.812336 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.812338 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.812375 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.812410 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.812350 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.812464 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:03.8124468 +0000 UTC m=+23.076854856 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:01 crc kubenswrapper[4806]: E0126 07:54:01.812493 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:03.812480011 +0000 UTC m=+23.076888067 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.818082 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.843990 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"
cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.848510 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.848553 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.848561 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.848577 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.848586 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:01Z","lastTransitionTime":"2026-01-26T07:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.855161 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.874726 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.914613 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.941007 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.950464 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.950505 4806 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.950529 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.950548 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.950558 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:01Z","lastTransitionTime":"2026-01-26T07:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.954667 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 07:54:01 crc kubenswrapper[4806]: I0126 07:54:01.974202 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.014628 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.036431 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:38:23.687609734 +0000 UTC Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.041650 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.041733 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.041757 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:02 crc kubenswrapper[4806]: E0126 07:54:02.041870 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:02 crc kubenswrapper[4806]: E0126 07:54:02.041945 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:02 crc kubenswrapper[4806]: E0126 07:54:02.042203 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.052140 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:02Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.053558 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.053580 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 
07:54:02.053588 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.053600 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.053609 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:02Z","lastTransitionTime":"2026-01-26T07:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.055013 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.076338 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.095172 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.114644 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.135239 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.157113 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.157420 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.157430 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.157443 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.157454 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:02Z","lastTransitionTime":"2026-01-26T07:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.176029 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.195104 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.209896 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerID="2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18" exitCode=0 Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.210074 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.212580 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"2165bee14b56b0c6a41e3958e9481ed141857d7e598045c0db4cf2040477f3d7"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.219167 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.219730 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7glh" event={"ID":"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041","Type":"ContainerStarted","Data":"9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.228198 4806 generic.go:334] "Generic (PLEG): container finished" podID="59844d88-1bf9-4761-b664-74623e7532c3" containerID="19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909" exitCode=0 Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.228367 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" event={"ID":"59844d88-1bf9-4761-b664-74623e7532c3","Type":"ContainerDied","Data":"19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.235205 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.255625 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.260481 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.260531 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.260545 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.260563 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.260572 4806 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:02Z","lastTransitionTime":"2026-01-26T07:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.274658 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.295144 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.331502 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4
277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:02Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.335617 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.356806 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.362359 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.362401 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.362413 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.362432 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.362444 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:02Z","lastTransitionTime":"2026-01-26T07:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.377558 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.394445 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.415790 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.435838 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.456900 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.464829 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.464865 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.464876 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.464921 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.464934 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:02Z","lastTransitionTime":"2026-01-26T07:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.476182 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.515592 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.535344 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.555739 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.566613 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.566667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.566680 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.566697 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.566709 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:02Z","lastTransitionTime":"2026-01-26T07:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.575104 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.595294 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.615214 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.652336 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:02Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.671490 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.671537 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.671546 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.671560 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.671573 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:02Z","lastTransitionTime":"2026-01-26T07:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.685113 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:02Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.720327 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.727461 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:02Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.734863 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.750249 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.773392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.773420 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.773429 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.773444 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.773455 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:02Z","lastTransitionTime":"2026-01-26T07:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.786227 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:02Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.827018 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:02Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.869190 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:02Z 
is after 2025-08-24T17:21:41Z" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.875214 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.875247 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.875258 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.875272 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.875281 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:02Z","lastTransitionTime":"2026-01-26T07:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.904815 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:02Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.948660 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:02Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.977238 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.977286 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.977300 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.977323 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.977338 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:02Z","lastTransitionTime":"2026-01-26T07:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:02 crc kubenswrapper[4806]: I0126 07:54:02.986682 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:02Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.022651 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.036711 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 19:01:02.61204815 +0000 UTC Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.064538 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.079106 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.079150 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.079159 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.079173 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.079184 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:03Z","lastTransitionTime":"2026-01-26T07:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.106560 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc 
kubenswrapper[4806]: I0126 07:54:03.142760 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.181028 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.181065 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.181076 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.181092 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.181101 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:03Z","lastTransitionTime":"2026-01-26T07:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
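
The "Node became not ready" condition recorded here (and repeated throughout this window) is independent of the webhook failures: the container runtime reports NetworkReady=false because nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet, and the PLEG events further down show the OVN-Kubernetes and multus init containers still starting. The following is a minimal sketch of the kind of presence check involved, assuming only the directory quoted in the message and the usual libcni file extensions; the authoritative check is performed by CRI-O/ocicni, not by this script.

    import os

    # Directory named in the NetworkPluginNotReady message above.
    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"

    def cni_config_present(conf_dir: str = CNI_CONF_DIR) -> bool:
        """Return True if at least one CNI network config file exists."""
        try:
            names = os.listdir(conf_dir)
        except FileNotFoundError:
            return False
        return any(n.endswith((".conf", ".conflist", ".json")) for n in names)

    print("CNI config present:", cni_config_present())

Once the ovnkube-node pod seen in the PLEG events finishes initializing and drops its configuration file there, this condition normally clears on the next sync.
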
Has your network provider started?"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.184786 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.223543 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.233628 4806 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.233668 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.233680 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.233688 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.233699 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.233707 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.235188 4806 generic.go:334] "Generic (PLEG): container finished" podID="59844d88-1bf9-4761-b664-74623e7532c3" containerID="7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614" exitCode=0 Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.235270 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" event={"ID":"59844d88-1bf9-4761-b664-74623e7532c3","Type":"ContainerDied","Data":"7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.236317 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3"} Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.266474 4806 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.281992 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.283500 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.283574 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.283587 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.283606 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.283617 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:03Z","lastTransitionTime":"2026-01-26T07:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.322717 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.364562 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.386414 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.386457 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.386468 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.386486 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.386498 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:03Z","lastTransitionTime":"2026-01-26T07:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
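
Each failed patch above embeds the full JSON strategic-merge patch the kubelet tried to send, but it is awkward to read because the quotes are escaped twice: once when the patch was formatted into the Go error string and once more when klog quoted the err value, so every JSON quote appears as \\\". The sketch below shows one way to peel both layers and load the patch, assuming you have already cut an escaped fragment out of a record; the uid is copied from the kube-controller-manager-crc patch above.

    import json

    # A short fragment in the same doubly-escaped form as the patches in these records.
    escaped = r'{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"}}'

    # Each pass treats the text as a JSON string literal, turning \\\" into \"
    # and then \" into ", after which the patch itself parses as JSON.
    once = json.loads('"' + escaped + '"')
    patch = json.loads(json.loads('"' + once + '"'))

    print(patch["metadata"]["uid"])   # -> 3a2716f1-7f63-4f1b-88e7-412b33a5afe3
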
Has your network provider started?"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.404954 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.443624 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.489686 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.489754 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.489549 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z 
is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.489769 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.489876 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.489894 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:03Z","lastTransitionTime":"2026-01-26T07:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.525605 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.568879 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageI
D\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.592934 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.592992 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.593007 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.593025 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.593038 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:03Z","lastTransitionTime":"2026-01-26T07:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.604757 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.644071 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.683508 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.695908 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.695954 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.695968 4806 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.695985 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.695996 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:03Z","lastTransitionTime":"2026-01-26T07:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.721454 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.726927 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.727226 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:54:07.727182332 +0000 UTC m=+26.991590408 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.765574 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bi
n\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.799771 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.799824 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.799838 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.799859 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.799873 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:03Z","lastTransitionTime":"2026-01-26T07:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.804402 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.828020 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.828081 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.828122 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.828161 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.828261 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.828324 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.828355 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.828354 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.828398 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.828419 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.828374 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.828330 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:07.828309449 +0000 UTC m=+27.092717515 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.828568 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:07.828548966 +0000 UTC m=+27.092957052 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.828595 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:07.828581097 +0000 UTC m=+27.092989163 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.828599 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 26 07:54:03 crc kubenswrapper[4806]: E0126 07:54:03.828676 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:07.828643319 +0000 UTC m=+27.093051535 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.847834 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.881818 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:03Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.901969 4806 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.902022 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.902038 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.902059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:03 crc kubenswrapper[4806]: I0126 07:54:03.902094 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:03Z","lastTransitionTime":"2026-01-26T07:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.004612 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.004661 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.004677 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.004698 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.004713 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:04Z","lastTransitionTime":"2026-01-26T07:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.037129 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 12:17:52.616688417 +0000 UTC Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.041463 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.041463 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.041545 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:04 crc kubenswrapper[4806]: E0126 07:54:04.041648 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 07:54:04 crc kubenswrapper[4806]: E0126 07:54:04.041795 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 07:54:04 crc kubenswrapper[4806]: E0126 07:54:04.041918 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.106636 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.106677 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.106688 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.106706 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.106720 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:04Z","lastTransitionTime":"2026-01-26T07:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.209842 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.209882 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.209894 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.209911 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.209925 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:04Z","lastTransitionTime":"2026-01-26T07:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.242117 4806 generic.go:334] "Generic (PLEG): container finished" podID="59844d88-1bf9-4761-b664-74623e7532c3" containerID="5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d" exitCode=0 Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.242194 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" event={"ID":"59844d88-1bf9-4761-b664-74623e7532c3","Type":"ContainerDied","Data":"5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d"} Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.255931 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.273009 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.289600 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.307667 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.311765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.311800 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.311813 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.311826 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.311836 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:04Z","lastTransitionTime":"2026-01-26T07:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.326054 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc5
74a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.340663 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.387414 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.404539 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.414586 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.414614 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.414622 4806 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.414634 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.414642 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:04Z","lastTransitionTime":"2026-01-26T07:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.426945 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.444904 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.458667 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.476393 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.486750 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.496676 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.506431 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:04Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.517104 4806 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.517150 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.517178 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.517197 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.517207 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:04Z","lastTransitionTime":"2026-01-26T07:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.619606 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.619648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.619659 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.619676 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.619690 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:04Z","lastTransitionTime":"2026-01-26T07:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.721731 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.721769 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.721781 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.721794 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.721802 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:04Z","lastTransitionTime":"2026-01-26T07:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.824095 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.824332 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.824408 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.824479 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.824559 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:04Z","lastTransitionTime":"2026-01-26T07:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.926714 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.926750 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.926759 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.926777 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:04 crc kubenswrapper[4806]: I0126 07:54:04.926792 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:04Z","lastTransitionTime":"2026-01-26T07:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.029894 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.029949 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.029962 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.029983 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.029996 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:05Z","lastTransitionTime":"2026-01-26T07:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.037394 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 11:00:52.485517467 +0000 UTC Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.132905 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.132939 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.132948 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.132962 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.132972 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:05Z","lastTransitionTime":"2026-01-26T07:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.235351 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.235396 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.235405 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.235418 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.235428 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:05Z","lastTransitionTime":"2026-01-26T07:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.246925 4806 generic.go:334] "Generic (PLEG): container finished" podID="59844d88-1bf9-4761-b664-74623e7532c3" containerID="4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80" exitCode=0 Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.246965 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" event={"ID":"59844d88-1bf9-4761-b664-74623e7532c3","Type":"ContainerDied","Data":"4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80"} Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.261446 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.282219 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.298253 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.316575 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z 
is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.328603 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.340801 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.340848 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.340862 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.340882 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.340895 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:05Z","lastTransitionTime":"2026-01-26T07:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.346166 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.359977 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.370099 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.382714 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.396187 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.422011 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.437055 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.442297 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.442347 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.442361 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.442383 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.442394 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:05Z","lastTransitionTime":"2026-01-26T07:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.448327 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.459562 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.473789 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:05Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.544756 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.544796 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.544807 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.544821 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.544832 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:05Z","lastTransitionTime":"2026-01-26T07:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.646933 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.646983 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.646996 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.647014 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.647028 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:05Z","lastTransitionTime":"2026-01-26T07:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.749326 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.749377 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.749387 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.749404 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.749415 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:05Z","lastTransitionTime":"2026-01-26T07:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.851334 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.851373 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.851386 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.851402 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.851414 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:05Z","lastTransitionTime":"2026-01-26T07:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.954041 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.954090 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.954105 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.954125 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:05 crc kubenswrapper[4806]: I0126 07:54:05.954140 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:05Z","lastTransitionTime":"2026-01-26T07:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.037599 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:38:42.500950106 +0000 UTC Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.040847 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.040873 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.040847 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:06 crc kubenswrapper[4806]: E0126 07:54:06.040988 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:06 crc kubenswrapper[4806]: E0126 07:54:06.041050 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:06 crc kubenswrapper[4806]: E0126 07:54:06.041137 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.056737 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.056774 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.056786 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.056804 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.056814 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:06Z","lastTransitionTime":"2026-01-26T07:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.159421 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.159453 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.159462 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.159475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.159484 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:06Z","lastTransitionTime":"2026-01-26T07:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.251277 4806 generic.go:334] "Generic (PLEG): container finished" podID="59844d88-1bf9-4761-b664-74623e7532c3" containerID="e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a" exitCode=0 Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.251307 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" event={"ID":"59844d88-1bf9-4761-b664-74623e7532c3","Type":"ContainerDied","Data":"e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a"} Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.257443 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047"} Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.261360 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.261405 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.261416 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.261433 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.261445 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:06Z","lastTransitionTime":"2026-01-26T07:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.271012 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.284783 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.295674 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.313332 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z 
is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.325865 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.343073 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.364628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.364670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.364682 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.364698 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.364709 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:06Z","lastTransitionTime":"2026-01-26T07:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.365024 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.379286 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.392521 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.418745 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.436348 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.467440 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.467483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.467494 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.467511 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.467536 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:06Z","lastTransitionTime":"2026-01-26T07:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.483252 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.500257 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.511030 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.519385 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:06Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.569511 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.569568 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.569578 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.569591 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.569602 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:06Z","lastTransitionTime":"2026-01-26T07:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.672567 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.672608 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.672620 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.672638 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.672650 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:06Z","lastTransitionTime":"2026-01-26T07:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.774930 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.774972 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.774982 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.774997 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.775007 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:06Z","lastTransitionTime":"2026-01-26T07:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.877027 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.877059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.877069 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.877082 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.877092 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:06Z","lastTransitionTime":"2026-01-26T07:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.979008 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.979042 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.979052 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.979065 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:06 crc kubenswrapper[4806]: I0126 07:54:06.979074 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:06Z","lastTransitionTime":"2026-01-26T07:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.038467 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:46:30.060678082 +0000 UTC Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.082455 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.082507 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.082516 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.082554 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.082568 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:07Z","lastTransitionTime":"2026-01-26T07:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.184503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.184574 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.184590 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.184611 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.184626 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:07Z","lastTransitionTime":"2026-01-26T07:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.264115 4806 generic.go:334] "Generic (PLEG): container finished" podID="59844d88-1bf9-4761-b664-74623e7532c3" containerID="638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5" exitCode=0 Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.264160 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" event={"ID":"59844d88-1bf9-4761-b664-74623e7532c3","Type":"ContainerDied","Data":"638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5"} Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.277807 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.286893 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:07 crc 
kubenswrapper[4806]: I0126 07:54:07.286927 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.286936 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.286952 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.286961 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:07Z","lastTransitionTime":"2026-01-26T07:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.297032 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.309120 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.322903 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.342029 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z 
is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.360846 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd
1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\
\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.380600 4806 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8
dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.389085 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.389115 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.389127 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.389143 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.389156 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:07Z","lastTransitionTime":"2026-01-26T07:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.393899 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.405666 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.421369 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.485588 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.491565 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.491774 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.491855 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.491945 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.492017 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:07Z","lastTransitionTime":"2026-01-26T07:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.502333 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.515654 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.528714 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.544326 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:07Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.594836 4806 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.594861 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.594871 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.594885 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.594895 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:07Z","lastTransitionTime":"2026-01-26T07:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.697424 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.697459 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.697470 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.697484 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.697493 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:07Z","lastTransitionTime":"2026-01-26T07:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.784098 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.784284 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:54:15.784257706 +0000 UTC m=+35.048665762 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.799693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.799730 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.799741 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.799757 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.799766 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:07Z","lastTransitionTime":"2026-01-26T07:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.884851 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.884919 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.884950 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.884984 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.885062 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.885094 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.885165 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:15.885144386 +0000 UTC m=+35.149552432 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.885186 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:15.885177737 +0000 UTC m=+35.149585783 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.885222 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.885273 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.885290 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.885239 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.885351 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.885365 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.885366 4806 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:15.885339412 +0000 UTC m=+35.149747658 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:07 crc kubenswrapper[4806]: E0126 07:54:07.885405 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:15.885392654 +0000 UTC m=+35.149800910 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.902858 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.902905 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.902915 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.902933 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:07 crc kubenswrapper[4806]: I0126 07:54:07.902944 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:07Z","lastTransitionTime":"2026-01-26T07:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.005157 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.005198 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.005210 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.005226 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.005238 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:08Z","lastTransitionTime":"2026-01-26T07:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.039381 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:25:35.04029978 +0000 UTC Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.041786 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.041799 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.041830 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:08 crc kubenswrapper[4806]: E0126 07:54:08.041920 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:08 crc kubenswrapper[4806]: E0126 07:54:08.042037 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:08 crc kubenswrapper[4806]: E0126 07:54:08.042159 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.108066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.108104 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.108113 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.108127 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.108136 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:08Z","lastTransitionTime":"2026-01-26T07:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.210492 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.210594 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.210617 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.210652 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.210676 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:08Z","lastTransitionTime":"2026-01-26T07:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.273115 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a80e8882838553063b1a951346"} Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.273692 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.273787 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.273902 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.276995 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" event={"ID":"59844d88-1bf9-4761-b664-74623e7532c3","Type":"ContainerStarted","Data":"80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3"} Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.290279 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime
\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.306246 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.308694 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8279948
8ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.310486 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.313214 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.313298 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.313357 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.313414 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.313472 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:08Z","lastTransitionTime":"2026-01-26T07:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.332899 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.350858 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.371461 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a8
0e8882838553063b1a951346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.392110 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.416416 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.416485 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.416503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.416569 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.416590 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:08Z","lastTransitionTime":"2026-01-26T07:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.417518 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.441644 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.456887 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.472683 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.488829 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.515087 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a
2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.519122 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.519379 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.519526 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.519705 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.519829 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:08Z","lastTransitionTime":"2026-01-26T07:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.537602 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.553944 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.578799 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.597315 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.617254 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.622616 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.622648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.622658 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.622676 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.622689 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:08Z","lastTransitionTime":"2026-01-26T07:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.638117 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.653164 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.674877 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a8
0e8882838553063b1a951346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.694927 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.711906 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.725401 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.725452 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.725468 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.725489 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.725504 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:08Z","lastTransitionTime":"2026-01-26T07:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.732592 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.747241 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.759644 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.776067 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.800662 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.817476 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.828797 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.828835 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.828848 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.828866 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.828881 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:08Z","lastTransitionTime":"2026-01-26T07:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.835392 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.851257 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:08Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.932227 4806 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.932321 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.932339 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.932366 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:08 crc kubenswrapper[4806]: I0126 07:54:08.932388 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:08Z","lastTransitionTime":"2026-01-26T07:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.035322 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.035370 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.035382 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.035404 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.035417 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.040566 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 10:08:44.943158706 +0000 UTC Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.138251 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.138293 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.138302 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.138318 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.138330 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.240312 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.240354 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.240365 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.240382 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.240393 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.342968 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.343009 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.343019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.343037 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.343047 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.359802 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.359842 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.359852 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.359869 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.359884 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: E0126 07:54:09.372171 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:09Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.376492 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.376550 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.376560 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.376578 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.376590 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: E0126 07:54:09.387178 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:09Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.390297 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.390328 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.390338 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.390353 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.390363 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: E0126 07:54:09.400830 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:09Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.403624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.403658 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.403670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.403693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.403704 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: E0126 07:54:09.425327 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:09Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.432122 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.432252 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.432319 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.432381 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.432447 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: E0126 07:54:09.446463 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:09Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:09 crc kubenswrapper[4806]: E0126 07:54:09.446834 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.448740 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.449829 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.450058 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.450245 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.450398 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.553565 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.553843 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.553973 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.554059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.554139 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.656237 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.656485 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.656559 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.656653 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.656714 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.759417 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.759654 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.759715 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.759794 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.759856 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.863208 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.864166 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.864377 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.864562 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.864728 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.968453 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.969001 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.969119 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.969236 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:09 crc kubenswrapper[4806]: I0126 07:54:09.969324 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:09Z","lastTransitionTime":"2026-01-26T07:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.041711 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.041754 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:12:29.649672318 +0000 UTC Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.041881 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:10 crc kubenswrapper[4806]: E0126 07:54:10.042066 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.042118 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:10 crc kubenswrapper[4806]: E0126 07:54:10.042311 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:10 crc kubenswrapper[4806]: E0126 07:54:10.042441 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.071878 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.071917 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.071928 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.071947 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.071960 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:10Z","lastTransitionTime":"2026-01-26T07:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.175348 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.175408 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.175423 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.175489 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.175514 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:10Z","lastTransitionTime":"2026-01-26T07:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.278937 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.278993 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.279003 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.279017 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.279026 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:10Z","lastTransitionTime":"2026-01-26T07:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.382123 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.382192 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.382212 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.382243 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.382273 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:10Z","lastTransitionTime":"2026-01-26T07:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.488552 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.488591 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.488603 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.488621 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.488633 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:10Z","lastTransitionTime":"2026-01-26T07:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.592683 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.592756 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.592773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.592801 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.592820 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:10Z","lastTransitionTime":"2026-01-26T07:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.696616 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.696680 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.696694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.696716 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.696733 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:10Z","lastTransitionTime":"2026-01-26T07:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.800413 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.800483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.800500 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.800559 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.800586 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:10Z","lastTransitionTime":"2026-01-26T07:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.903753 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.903843 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.904026 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.904246 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:10 crc kubenswrapper[4806]: I0126 07:54:10.904283 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:10Z","lastTransitionTime":"2026-01-26T07:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.007593 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.007676 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.007689 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.007716 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.007741 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:11Z","lastTransitionTime":"2026-01-26T07:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.041979 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:51:09.700178708 +0000 UTC Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.061988 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.086453 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.111205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.111292 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.111318 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.111356 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.111384 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:11Z","lastTransitionTime":"2026-01-26T07:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.123344 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a80e8882838553063b1a951346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.144549 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.168084 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.197669 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.211811 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.215278 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 
07:54:11.215349 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.215370 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.215396 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.215415 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:11Z","lastTransitionTime":"2026-01-26T07:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.227877 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-
cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.248385 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da
16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.274295 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54
b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.290299 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/0.log" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.292737 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.296804 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerID="250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a80e8882838553063b1a951346" exitCode=1 Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.296909 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a80e8882838553063b1a951346"} Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.298912 4806 scope.go:117] "RemoveContainer" containerID="250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a80e8882838553063b1a951346" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.313078 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.319507 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.319595 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.319620 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.319651 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.319670 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:11Z","lastTransitionTime":"2026-01-26T07:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.333776 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.351949 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.369118 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.382719 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.399835 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.416594 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.424407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.424737 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.424833 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.424932 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.425016 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:11Z","lastTransitionTime":"2026-01-26T07:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.441114 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a80e8882838553063b1a951346\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a80e8882838553063b1a951346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:10Z\\\",\\\"message\\\":\\\"ons/factory.go:141\\\\nI0126 07:54:10.044291 6015 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 07:54:10.044304 6015 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:10.044958 6015 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045293 6015 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045726 6015 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045814 6015 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:10.046455 6015 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 07:54:10.046486 6015 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 07:54:10.046495 6015 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 07:54:10.046559 6015 factory.go:656] Stopping watch factory\\\\nI0126 07:54:10.046569 6015 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 07:54:10.046587 6015 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 07:54:10.046599 6015 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.462399 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4b
a8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.476699 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.491044 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.502688 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.521517 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.527978 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.528038 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.528053 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.528073 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.528085 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:11Z","lastTransitionTime":"2026-01-26T07:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.538426 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:
54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.563157 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.581122 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.598849 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.612271 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.630944 4806 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.630996 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.631009 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.631029 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.631042 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:11Z","lastTransitionTime":"2026-01-26T07:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.631490 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.747292 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.747353 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.747371 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.747400 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.747422 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:11Z","lastTransitionTime":"2026-01-26T07:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.850116 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.850163 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.850178 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.850204 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.850222 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:11Z","lastTransitionTime":"2026-01-26T07:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.953672 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.953750 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.953770 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.953801 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:11 crc kubenswrapper[4806]: I0126 07:54:11.953825 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:11Z","lastTransitionTime":"2026-01-26T07:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.041368 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.041646 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.041717 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:12 crc kubenswrapper[4806]: E0126 07:54:12.041742 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:12 crc kubenswrapper[4806]: E0126 07:54:12.041788 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:12 crc kubenswrapper[4806]: E0126 07:54:12.041847 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.042239 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 10:18:41.864489955 +0000 UTC Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.056750 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.056788 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.056801 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.056822 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.056835 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:12Z","lastTransitionTime":"2026-01-26T07:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.159748 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.159822 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.159838 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.159880 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.159895 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:12Z","lastTransitionTime":"2026-01-26T07:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.263813 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.263862 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.263875 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.263925 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.263941 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:12Z","lastTransitionTime":"2026-01-26T07:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.308087 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/0.log" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.311837 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1"} Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.313020 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.334651 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.357970 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.368102 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.368203 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.368229 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.368771 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.369050 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:12Z","lastTransitionTime":"2026-01-26T07:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.380918 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.404635 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.434712 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946
f09b03d256e2d49bbeea75f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a80e8882838553063b1a951346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:10Z\\\",\\\"message\\\":\\\"ons/factory.go:141\\\\nI0126 07:54:10.044291 6015 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 07:54:10.044304 6015 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:10.044958 6015 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045293 6015 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045726 6015 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045814 6015 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:10.046455 6015 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 07:54:10.046486 6015 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 07:54:10.046495 6015 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 07:54:10.046559 6015 factory.go:656] Stopping watch factory\\\\nI0126 07:54:10.046569 6015 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 07:54:10.046587 6015 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 07:54:10.046599 6015 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.468271 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.475079 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.475121 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.475131 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.475149 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.475165 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:12Z","lastTransitionTime":"2026-01-26T07:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.486590 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.507242 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.530006 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.545552 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.568990 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.577561 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.577628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.577641 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.577663 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.577676 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:12Z","lastTransitionTime":"2026-01-26T07:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.604510 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:
54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.631465 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.651175 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.667858 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.680679 4806 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.680727 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.680741 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.680766 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.680782 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:12Z","lastTransitionTime":"2026-01-26T07:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.783698 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.783765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.783778 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.783801 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.783816 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:12Z","lastTransitionTime":"2026-01-26T07:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.887815 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.887871 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.887895 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.887918 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.887932 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:12Z","lastTransitionTime":"2026-01-26T07:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.938698 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7"] Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.939278 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.942560 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.943959 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.967003 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},
{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.991867 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.992225 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.992352 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.992483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.992639 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:12Z","lastTransitionTime":"2026-01-26T07:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:12 crc kubenswrapper[4806]: I0126 07:54:12.993935 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:12Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.034435 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.037872 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47635c72-c532-48f3-839a-d86393eb5d24-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tfbl7\" (UID: \"47635c72-c532-48f3-839a-d86393eb5d24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.037934 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47635c72-c532-48f3-839a-d86393eb5d24-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tfbl7\" (UID: \"47635c72-c532-48f3-839a-d86393eb5d24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.037957 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4lbh\" (UniqueName: \"kubernetes.io/projected/47635c72-c532-48f3-839a-d86393eb5d24-kube-api-access-s4lbh\") pod \"ovnkube-control-plane-749d76644c-tfbl7\" (UID: \"47635c72-c532-48f3-839a-d86393eb5d24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.038094 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47635c72-c532-48f3-839a-d86393eb5d24-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tfbl7\" (UID: \"47635c72-c532-48f3-839a-d86393eb5d24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.042849 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 
+0000 UTC, rotation deadline is 2026-01-16 20:23:58.030735255 +0000 UTC Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.064258 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a80e8882838553063b1a951346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:10Z\\\",\\\"message\\\":\\\"ons/factory.go:141\\\\nI0126 07:54:10.044291 6015 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 07:54:10.044304 6015 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:10.044958 6015 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045293 6015 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045726 6015 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045814 6015 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:10.046455 6015 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 07:54:10.046486 6015 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 07:54:10.046495 6015 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 07:54:10.046559 6015 factory.go:656] Stopping watch factory\\\\nI0126 07:54:10.046569 6015 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 07:54:10.046587 6015 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 07:54:10.046599 6015 handler.go:208] Removed *v1.Namespace 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.090147 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.096289 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.096457 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.096570 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.096671 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.097390 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:13Z","lastTransitionTime":"2026-01-26T07:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.114154 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.128348 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.139288 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47635c72-c532-48f3-839a-d86393eb5d24-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tfbl7\" (UID: \"47635c72-c532-48f3-839a-d86393eb5d24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc 
kubenswrapper[4806]: I0126 07:54:13.139540 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47635c72-c532-48f3-839a-d86393eb5d24-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tfbl7\" (UID: \"47635c72-c532-48f3-839a-d86393eb5d24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.139665 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4lbh\" (UniqueName: \"kubernetes.io/projected/47635c72-c532-48f3-839a-d86393eb5d24-kube-api-access-s4lbh\") pod \"ovnkube-control-plane-749d76644c-tfbl7\" (UID: \"47635c72-c532-48f3-839a-d86393eb5d24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.139795 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47635c72-c532-48f3-839a-d86393eb5d24-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tfbl7\" (UID: \"47635c72-c532-48f3-839a-d86393eb5d24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.140325 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/47635c72-c532-48f3-839a-d86393eb5d24-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tfbl7\" (UID: \"47635c72-c532-48f3-839a-d86393eb5d24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.140826 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/47635c72-c532-48f3-839a-d86393eb5d24-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tfbl7\" (UID: \"47635c72-c532-48f3-839a-d86393eb5d24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.145052 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.148145 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/47635c72-c532-48f3-839a-d86393eb5d24-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tfbl7\" (UID: \"47635c72-c532-48f3-839a-d86393eb5d24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.157853 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4lbh\" (UniqueName: \"kubernetes.io/projected/47635c72-c532-48f3-839a-d86393eb5d24-kube-api-access-s4lbh\") pod \"ovnkube-control-plane-749d76644c-tfbl7\" (UID: \"47635c72-c532-48f3-839a-d86393eb5d24\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.158781 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.177371 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.201450 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.201868 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.201994 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.202123 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.202238 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:13Z","lastTransitionTime":"2026-01-26T07:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.206481 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.221950 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.237364 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.254484 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.262385 4806 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.275253 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.301914 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.304694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.304744 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.304759 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.304781 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 
07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.304797 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:13Z","lastTransitionTime":"2026-01-26T07:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.320639 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" event={"ID":"47635c72-c532-48f3-839a-d86393eb5d24","Type":"ContainerStarted","Data":"55753c3a5d0c97b4c62fef7186a01968ef2c49d57755010ab48665f08ee8d0ac"} Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.323917 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/1.log" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.324628 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/0.log" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.327666 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerID="e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1" exitCode=1 Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.327714 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1"} Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.327781 4806 scope.go:117] "RemoveContainer" containerID="250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a80e8882838553063b1a951346" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.329060 4806 scope.go:117] "RemoveContainer" containerID="e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1" Jan 26 07:54:13 crc kubenswrapper[4806]: E0126 07:54:13.329342 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.359026 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a80e8882838553063b1a951346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:10Z\\\",\\\"message\\\":\\\"ons/factory.go:141\\\\nI0126 07:54:10.044291 6015 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 07:54:10.044304 6015 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:10.044958 6015 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045293 6015 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045726 6015 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045814 6015 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:10.046455 6015 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 07:54:10.046486 6015 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 07:54:10.046495 6015 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 07:54:10.046559 6015 factory.go:656] Stopping watch factory\\\\nI0126 07:54:10.046569 6015 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 07:54:10.046587 6015 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 07:54:10.046599 6015 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:13Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0126 07:54:13.077605 6146 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.077710 6146 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:13.080415 6146 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:13.081317 6146 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.082235 6146 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083087 6146 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083515 6146 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.383782 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.399928 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.408872 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.409114 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.409166 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.409189 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.409201 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:13Z","lastTransitionTime":"2026-01-26T07:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.415165 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.428119 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.443447 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.461267 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.484999 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
6T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc
88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.506614 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.511297 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.511321 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.511330 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.511348 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.511360 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:13Z","lastTransitionTime":"2026-01-26T07:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.527553 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.545297 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.559951 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.579881 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.603898 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.615003 4806 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.615063 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.615077 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.615099 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.615112 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:13Z","lastTransitionTime":"2026-01-26T07:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.620312 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.637623 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:13Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.719066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.719107 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.719140 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.719161 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.719173 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:13Z","lastTransitionTime":"2026-01-26T07:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.822024 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.822114 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.822136 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.822166 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.822183 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:13Z","lastTransitionTime":"2026-01-26T07:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.925773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.925829 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.925849 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.925873 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:13 crc kubenswrapper[4806]: I0126 07:54:13.925890 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:13Z","lastTransitionTime":"2026-01-26T07:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.028970 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.029050 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.029071 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.029098 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.029119 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:14Z","lastTransitionTime":"2026-01-26T07:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.041696 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.041698 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:14 crc kubenswrapper[4806]: E0126 07:54:14.041915 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.041699 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:14 crc kubenswrapper[4806]: E0126 07:54:14.042054 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:14 crc kubenswrapper[4806]: E0126 07:54:14.042169 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.043870 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 22:28:44.189429637 +0000 UTC Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.133017 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.133088 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.133103 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.133120 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.133132 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:14Z","lastTransitionTime":"2026-01-26T07:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.237509 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.237629 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.237648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.237677 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.237699 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:14Z","lastTransitionTime":"2026-01-26T07:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.338885 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" event={"ID":"47635c72-c532-48f3-839a-d86393eb5d24","Type":"ContainerStarted","Data":"4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.338998 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" event={"ID":"47635c72-c532-48f3-839a-d86393eb5d24","Type":"ContainerStarted","Data":"b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.340625 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.340668 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.340685 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.340709 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.340730 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:14Z","lastTransitionTime":"2026-01-26T07:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.342217 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/1.log" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.349848 4806 scope.go:117] "RemoveContainer" containerID="e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1" Jan 26 07:54:14 crc kubenswrapper[4806]: E0126 07:54:14.350161 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.361351 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.385401 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.409074 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.432678 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.444341 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.444408 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.444428 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.444456 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.444480 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:14Z","lastTransitionTime":"2026-01-26T07:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.462612 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.502152 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://250a784e5b1e7bbd67d31eec66bfcd8cfac8a2a80e8882838553063b1a951346\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:10Z\\\",\\\"message\\\":\\\"ons/factory.go:141\\\\nI0126 07:54:10.044291 6015 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0126 07:54:10.044304 6015 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:10.044958 6015 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045293 6015 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045726 6015 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 07:54:10.045814 6015 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:10.046455 6015 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0126 07:54:10.046486 6015 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0126 07:54:10.046495 6015 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0126 07:54:10.046559 6015 factory.go:656] Stopping watch factory\\\\nI0126 07:54:10.046569 6015 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 07:54:10.046587 6015 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 07:54:10.046599 6015 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:13Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0126 07:54:13.077605 6146 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.077710 6146 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:13.080415 6146 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:13.081317 6146 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.082235 6146 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083087 6146 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083515 6146 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"
name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.523102 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.548461 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.548519 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.548548 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.548568 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.548582 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:14Z","lastTransitionTime":"2026-01-26T07:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.551393 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:
54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.581048 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.608281 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.632273 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.652298 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.652370 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.652394 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.652427 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.652452 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:14Z","lastTransitionTime":"2026-01-26T07:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.655889 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.675688 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.697683 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.720799 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.740156 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.756206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.756429 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.756590 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.756780 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.756918 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:14Z","lastTransitionTime":"2026-01-26T07:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.759449 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.780854 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\
\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.803926 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.824618 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.861940 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.862001 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.862014 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.862036 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.862051 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:14Z","lastTransitionTime":"2026-01-26T07:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.875016 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:13Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0126 07:54:13.077605 6146 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.077710 6146 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:13.080415 6146 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:13.081317 6146 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.082235 6146 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083087 6146 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083515 6146 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.884269 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-rqmvf"] Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.885088 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:14 crc kubenswrapper[4806]: E0126 07:54:14.885182 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.898356 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.919217 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.944345 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.959823 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs9zt\" (UniqueName: \"kubernetes.io/projected/137029f0-49ad-4400-b117-2eff9271bce3-kube-api-access-bs9zt\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 
07:54:14.959911 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.962869 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.965465 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.965669 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.965770 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:14 
crc kubenswrapper[4806]: I0126 07:54:14.965870 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.965972 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:14Z","lastTransitionTime":"2026-01-26T07:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:14 crc kubenswrapper[4806]: I0126 07:54:14.987082 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\
"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:14Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.014069 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea
2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.042510 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026
-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.044154 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:08:06.268844523 +0000 UTC Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.060905 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.060971 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs9zt\" (UniqueName: \"kubernetes.io/projected/137029f0-49ad-4400-b117-2eff9271bce3-kube-api-access-bs9zt\") pod 
\"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.061080 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.061165 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs podName:137029f0-49ad-4400-b117-2eff9271bce3 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:15.561139883 +0000 UTC m=+34.825547959 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs") pod "network-metrics-daemon-rqmvf" (UID: "137029f0-49ad-4400-b117-2eff9271bce3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.068571 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.068985 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.069029 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.069043 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.069066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.069081 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:15Z","lastTransitionTime":"2026-01-26T07:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.084385 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.099728 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs9zt\" (UniqueName: \"kubernetes.io/projected/137029f0-49ad-4400-b117-2eff9271bce3-kube-api-access-bs9zt\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.102314 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.123396 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.139044 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.157186 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 
07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.172217 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.172279 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.172299 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.172328 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.172352 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:15Z","lastTransitionTime":"2026-01-26T07:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.177405 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.198155 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.217448 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.244271 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946
f09b03d256e2d49bbeea75f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:13Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0126 07:54:13.077605 6146 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.077710 6146 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:13.080415 6146 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:13.081317 6146 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.082235 6146 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083087 6146 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083515 6146 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.260970 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.275763 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.275810 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.275821 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.275842 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.275854 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:15Z","lastTransitionTime":"2026-01-26T07:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.283037 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.319259 4806 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8
dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.342622 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.359749 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.377045 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.379181 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.379273 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.379294 4806 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.379325 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.379350 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:15Z","lastTransitionTime":"2026-01-26T07:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.393169 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.410596 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.424841 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.440887 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.453221 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:15Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.482138 4806 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.482167 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.482176 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.482211 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.482220 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:15Z","lastTransitionTime":"2026-01-26T07:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.564709 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.564981 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.565140 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs podName:137029f0-49ad-4400-b117-2eff9271bce3 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:16.5651047 +0000 UTC m=+35.829512956 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs") pod "network-metrics-daemon-rqmvf" (UID: "137029f0-49ad-4400-b117-2eff9271bce3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.584375 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.584434 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.584443 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.584457 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.584466 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:15Z","lastTransitionTime":"2026-01-26T07:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.687937 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.687991 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.688012 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.688039 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.688059 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:15Z","lastTransitionTime":"2026-01-26T07:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.791391 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.791431 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.791443 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.791464 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.791479 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:15Z","lastTransitionTime":"2026-01-26T07:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.869926 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.870233 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:54:31.870179871 +0000 UTC m=+51.134587967 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.894992 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.895418 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.895694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.895951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.896259 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:15Z","lastTransitionTime":"2026-01-26T07:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.971447 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.971961 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.972418 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.971761 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.972780 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.973066 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:31.973015878 +0000 UTC m=+51.237424054 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.972303 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.973299 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.973325 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.973346 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.973375 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:31.973363609 +0000 UTC m=+51.237771655 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.973567 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:31.973497593 +0000 UTC m=+51.237905849 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.972677 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.973632 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.973662 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:15 crc kubenswrapper[4806]: E0126 07:54:15.973715 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:31.973701629 +0000 UTC m=+51.238109725 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:15 crc kubenswrapper[4806]: I0126 07:54:15.999773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.000211 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.000363 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.000503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.000743 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:16Z","lastTransitionTime":"2026-01-26T07:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.041183 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.041306 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.041195 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.041439 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:16 crc kubenswrapper[4806]: E0126 07:54:16.041371 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:16 crc kubenswrapper[4806]: E0126 07:54:16.041667 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:16 crc kubenswrapper[4806]: E0126 07:54:16.041840 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:16 crc kubenswrapper[4806]: E0126 07:54:16.041931 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.044394 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 01:26:13.701938363 +0000 UTC Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.105274 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.105336 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.105355 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.105383 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.105404 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:16Z","lastTransitionTime":"2026-01-26T07:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.208489 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.208548 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.208558 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.208575 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.208584 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:16Z","lastTransitionTime":"2026-01-26T07:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.312950 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.312990 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.313000 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.313016 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.313026 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:16Z","lastTransitionTime":"2026-01-26T07:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.416905 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.416987 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.417006 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.417038 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.417062 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:16Z","lastTransitionTime":"2026-01-26T07:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.520339 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.520377 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.520389 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.520409 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.520421 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:16Z","lastTransitionTime":"2026-01-26T07:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.581917 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:16 crc kubenswrapper[4806]: E0126 07:54:16.582148 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:16 crc kubenswrapper[4806]: E0126 07:54:16.582256 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs podName:137029f0-49ad-4400-b117-2eff9271bce3 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:18.582231754 +0000 UTC m=+37.846639810 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs") pod "network-metrics-daemon-rqmvf" (UID: "137029f0-49ad-4400-b117-2eff9271bce3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.623556 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.623985 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.624068 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.624172 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.624260 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:16Z","lastTransitionTime":"2026-01-26T07:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.727122 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.727165 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.727176 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.727192 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.727202 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:16Z","lastTransitionTime":"2026-01-26T07:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.829758 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.829829 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.829851 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.829882 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.829902 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:16Z","lastTransitionTime":"2026-01-26T07:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.933498 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.934054 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.934493 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.934613 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:16 crc kubenswrapper[4806]: I0126 07:54:16.934678 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:16Z","lastTransitionTime":"2026-01-26T07:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.037338 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.037722 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.037878 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.037980 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.038065 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:17Z","lastTransitionTime":"2026-01-26T07:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.046410 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 15:05:48.606684836 +0000 UTC Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.143195 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.143261 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.143285 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.143312 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.143332 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:17Z","lastTransitionTime":"2026-01-26T07:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.208178 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.231700 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.247462 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.247593 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.247615 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.247657 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.247677 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:17Z","lastTransitionTime":"2026-01-26T07:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.254897 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.291903 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946
f09b03d256e2d49bbeea75f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:13Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0126 07:54:13.077605 6146 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.077710 6146 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:13.080415 6146 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:13.081317 6146 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.082235 6146 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083087 6146 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083515 6146 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.311179 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.328839 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.345499 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.351489 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.351543 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.351558 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.351581 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.351596 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:17Z","lastTransitionTime":"2026-01-26T07:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.369390 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa4
1ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.387931 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.405016 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.430258 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.456980 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.457040 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.457054 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.457078 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.457094 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:17Z","lastTransitionTime":"2026-01-26T07:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.473942 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.491420 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad
206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.511917 4806 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.526234 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.541751 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.558011 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.559837 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.559967 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.560077 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.560186 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.560287 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:17Z","lastTransitionTime":"2026-01-26T07:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.574407 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:17Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.663632 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.664199 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.664467 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.664746 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.664988 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:17Z","lastTransitionTime":"2026-01-26T07:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.768600 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.768961 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.769122 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.769262 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.769391 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:17Z","lastTransitionTime":"2026-01-26T07:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.874302 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.874367 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.874389 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.874420 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.874439 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:17Z","lastTransitionTime":"2026-01-26T07:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.978460 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.978656 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.978687 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.978725 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:17 crc kubenswrapper[4806]: I0126 07:54:17.978752 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:17Z","lastTransitionTime":"2026-01-26T07:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.041175 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.041272 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.041386 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.041197 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:18 crc kubenswrapper[4806]: E0126 07:54:18.041399 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:18 crc kubenswrapper[4806]: E0126 07:54:18.041595 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:18 crc kubenswrapper[4806]: E0126 07:54:18.041709 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:18 crc kubenswrapper[4806]: E0126 07:54:18.041815 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.046958 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 14:06:50.810313234 +0000 UTC Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.083636 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.083688 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.083702 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.083728 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.083761 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:18Z","lastTransitionTime":"2026-01-26T07:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.186996 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.187067 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.187079 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.187097 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.187112 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:18Z","lastTransitionTime":"2026-01-26T07:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.291164 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.291231 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.291248 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.291276 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.291295 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:18Z","lastTransitionTime":"2026-01-26T07:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.394488 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.394611 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.394636 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.394664 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.394692 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:18Z","lastTransitionTime":"2026-01-26T07:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.496795 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.496854 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.496868 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.496894 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.496914 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:18Z","lastTransitionTime":"2026-01-26T07:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.600693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.600732 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.600741 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.600758 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.600769 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:18Z","lastTransitionTime":"2026-01-26T07:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.606102 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:18 crc kubenswrapper[4806]: E0126 07:54:18.606308 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:18 crc kubenswrapper[4806]: E0126 07:54:18.606416 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs podName:137029f0-49ad-4400-b117-2eff9271bce3 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:22.606392928 +0000 UTC m=+41.870801164 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs") pod "network-metrics-daemon-rqmvf" (UID: "137029f0-49ad-4400-b117-2eff9271bce3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.703770 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.703831 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.703851 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.703878 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.703893 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:18Z","lastTransitionTime":"2026-01-26T07:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.808137 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.808407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.808572 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.808733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.808861 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:18Z","lastTransitionTime":"2026-01-26T07:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.912790 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.912847 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.912869 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.912894 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:18 crc kubenswrapper[4806]: I0126 07:54:18.912914 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:18Z","lastTransitionTime":"2026-01-26T07:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.015689 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.015745 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.015763 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.015792 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.015814 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.047577 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 13:41:28.26216912 +0000 UTC Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.119808 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.120105 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.120459 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.120979 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.121301 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.225053 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.225105 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.225123 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.225146 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.225162 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.328855 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.329273 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.329483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.329740 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.329947 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.433633 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.433685 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.433705 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.433731 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.433748 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.537503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.537621 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.537654 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.537689 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.537714 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.641588 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.641670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.641694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.641732 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.641757 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.678920 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.679325 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.679447 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.679651 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.679797 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: E0126 07:54:19.704004 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:19Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.710124 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.710208 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.710228 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.710260 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.710280 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: E0126 07:54:19.730728 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:19Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.737195 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.737262 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.737283 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.737310 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.737331 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: E0126 07:54:19.759912 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:19Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.765881 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.765933 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.765951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.765979 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.765996 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: E0126 07:54:19.786861 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:19Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.794387 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.794465 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.794483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.794514 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.794558 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: E0126 07:54:19.812327 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:19Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:19 crc kubenswrapper[4806]: E0126 07:54:19.812604 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.814916 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.814981 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.815004 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.815035 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.815056 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.918805 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.918866 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.918888 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.918916 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:19 crc kubenswrapper[4806]: I0126 07:54:19.918938 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:19Z","lastTransitionTime":"2026-01-26T07:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.022134 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.022244 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.022273 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.022310 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.022331 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:20Z","lastTransitionTime":"2026-01-26T07:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.041571 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.041621 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.041681 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:20 crc kubenswrapper[4806]: E0126 07:54:20.042889 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.041852 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:20 crc kubenswrapper[4806]: E0126 07:54:20.042971 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:20 crc kubenswrapper[4806]: E0126 07:54:20.042662 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:20 crc kubenswrapper[4806]: E0126 07:54:20.043274 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.047888 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:04:30.400578538 +0000 UTC Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.126212 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.126699 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.126850 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.126978 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.127136 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:20Z","lastTransitionTime":"2026-01-26T07:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.230131 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.230218 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.230232 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.230281 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.230297 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:20Z","lastTransitionTime":"2026-01-26T07:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.332374 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.332446 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.332469 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.332497 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.332513 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:20Z","lastTransitionTime":"2026-01-26T07:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.434899 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.435112 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.435206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.435282 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.435339 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:20Z","lastTransitionTime":"2026-01-26T07:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.537859 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.538098 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.538163 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.538275 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.538338 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:20Z","lastTransitionTime":"2026-01-26T07:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.641644 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.641683 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.641693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.641708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.641719 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:20Z","lastTransitionTime":"2026-01-26T07:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.744381 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.744765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.744928 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.745067 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.745201 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:20Z","lastTransitionTime":"2026-01-26T07:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.847084 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.847133 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.847147 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.847165 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.847177 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:20Z","lastTransitionTime":"2026-01-26T07:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.949911 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.949954 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.949970 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.949988 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:20 crc kubenswrapper[4806]: I0126 07:54:20.949999 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:20Z","lastTransitionTime":"2026-01-26T07:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.048160 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:11:18.716594294 +0000 UTC Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.054977 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.055016 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.055027 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.055041 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.055051 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:21Z","lastTransitionTime":"2026-01-26T07:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.060724 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.074946 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.085819 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.095647 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.106976 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 
07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.132139 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c
8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:13Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0126 07:54:13.077605 6146 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.077710 6146 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:13.080415 6146 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:13.081317 6146 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.082235 6146 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083087 6146 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083515 6146 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.144168 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.157368 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.157407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.157420 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.157436 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.157446 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:21Z","lastTransitionTime":"2026-01-26T07:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.159924 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.173610 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.209109 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.219961 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.259489 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.259546 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.259557 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.259572 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.259582 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:21Z","lastTransitionTime":"2026-01-26T07:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.275216 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.294921 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.313791 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.325145 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad
206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.336438 4806 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.348125 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.361626 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.361663 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.361674 4806 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.361691 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.361702 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:21Z","lastTransitionTime":"2026-01-26T07:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.463910 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.463962 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.463980 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.464004 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.464019 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:21Z","lastTransitionTime":"2026-01-26T07:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.566696 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.566779 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.566814 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.566846 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.566873 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:21Z","lastTransitionTime":"2026-01-26T07:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.669763 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.669833 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.669852 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.669882 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.669901 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:21Z","lastTransitionTime":"2026-01-26T07:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.772034 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.772083 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.772095 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.772113 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.772136 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:21Z","lastTransitionTime":"2026-01-26T07:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.875457 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.875587 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.875614 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.875648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.875675 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:21Z","lastTransitionTime":"2026-01-26T07:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.983255 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.983919 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.983951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.983973 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:21 crc kubenswrapper[4806]: I0126 07:54:21.983995 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:21Z","lastTransitionTime":"2026-01-26T07:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.041863 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.041946 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.042063 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:22 crc kubenswrapper[4806]: E0126 07:54:22.042097 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.042178 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:22 crc kubenswrapper[4806]: E0126 07:54:22.042363 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:22 crc kubenswrapper[4806]: E0126 07:54:22.042495 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:22 crc kubenswrapper[4806]: E0126 07:54:22.042674 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.049036 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:12:02.487815477 +0000 UTC Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.088270 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.088324 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.088338 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.088359 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.088372 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:22Z","lastTransitionTime":"2026-01-26T07:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.194103 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.194162 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.194185 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.194216 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.194236 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:22Z","lastTransitionTime":"2026-01-26T07:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.298361 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.298416 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.298430 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.298450 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.298466 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:22Z","lastTransitionTime":"2026-01-26T07:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.401458 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.401616 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.401635 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.401665 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.401716 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:22Z","lastTransitionTime":"2026-01-26T07:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.505197 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.505677 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.505845 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.506002 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.506128 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:22Z","lastTransitionTime":"2026-01-26T07:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.609913 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.609967 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.609980 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.610003 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.610017 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:22Z","lastTransitionTime":"2026-01-26T07:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.652580 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:22 crc kubenswrapper[4806]: E0126 07:54:22.652720 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:22 crc kubenswrapper[4806]: E0126 07:54:22.652779 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs podName:137029f0-49ad-4400-b117-2eff9271bce3 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:30.652761608 +0000 UTC m=+49.917169664 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs") pod "network-metrics-daemon-rqmvf" (UID: "137029f0-49ad-4400-b117-2eff9271bce3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.713783 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.713843 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.713861 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.713889 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.713909 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:22Z","lastTransitionTime":"2026-01-26T07:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.816909 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.816969 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.816989 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.817015 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.817035 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:22Z","lastTransitionTime":"2026-01-26T07:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.920674 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.920746 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.920766 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.920794 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:22 crc kubenswrapper[4806]: I0126 07:54:22.920812 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:22Z","lastTransitionTime":"2026-01-26T07:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.024252 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.024316 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.024338 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.024370 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.024394 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:23Z","lastTransitionTime":"2026-01-26T07:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.049633 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 08:31:39.167500134 +0000 UTC Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.127746 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.127809 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.127827 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.127849 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.127865 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:23Z","lastTransitionTime":"2026-01-26T07:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.231353 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.231407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.231421 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.231446 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.231459 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:23Z","lastTransitionTime":"2026-01-26T07:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.334851 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.335290 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.335516 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.335741 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.335908 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:23Z","lastTransitionTime":"2026-01-26T07:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.439368 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.439445 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.439468 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.439499 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.439549 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:23Z","lastTransitionTime":"2026-01-26T07:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.542439 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.542479 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.542488 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.542502 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.542511 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:23Z","lastTransitionTime":"2026-01-26T07:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.646094 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.646189 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.646217 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.646254 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.646283 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:23Z","lastTransitionTime":"2026-01-26T07:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.750168 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.750236 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.750262 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.750300 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.750327 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:23Z","lastTransitionTime":"2026-01-26T07:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.854397 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.854462 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.854487 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.854556 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.854583 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:23Z","lastTransitionTime":"2026-01-26T07:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.958596 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.958653 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.958675 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.958703 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:23 crc kubenswrapper[4806]: I0126 07:54:23.958724 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:23Z","lastTransitionTime":"2026-01-26T07:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.041184 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.041344 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:24 crc kubenswrapper[4806]: E0126 07:54:24.041394 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.041180 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.041213 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:24 crc kubenswrapper[4806]: E0126 07:54:24.041622 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:24 crc kubenswrapper[4806]: E0126 07:54:24.041990 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:24 crc kubenswrapper[4806]: E0126 07:54:24.042122 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.050620 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:39:32.836184564 +0000 UTC Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.062071 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.062175 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.062207 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.062233 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.062251 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:24Z","lastTransitionTime":"2026-01-26T07:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.166510 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.166626 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.166651 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.166686 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.166711 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:24Z","lastTransitionTime":"2026-01-26T07:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.270213 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.270308 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.270335 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.270374 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.270401 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:24Z","lastTransitionTime":"2026-01-26T07:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.380768 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.380860 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.380885 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.380935 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.380955 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:24Z","lastTransitionTime":"2026-01-26T07:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.484436 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.484867 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.484996 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.485154 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.485310 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:24Z","lastTransitionTime":"2026-01-26T07:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.588435 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.589070 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.589290 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.589555 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.589772 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:24Z","lastTransitionTime":"2026-01-26T07:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.693298 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.693907 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.694128 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.694332 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.694581 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:24Z","lastTransitionTime":"2026-01-26T07:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.798917 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.798989 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.799009 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.799036 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.799056 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:24Z","lastTransitionTime":"2026-01-26T07:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.902709 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.902786 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.902807 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.902839 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:24 crc kubenswrapper[4806]: I0126 07:54:24.902860 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:24Z","lastTransitionTime":"2026-01-26T07:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.006618 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.006691 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.006710 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.006741 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.006763 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:25Z","lastTransitionTime":"2026-01-26T07:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.050926 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 16:41:34.313597055 +0000 UTC Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.110782 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.110848 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.110866 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.110895 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.110917 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:25Z","lastTransitionTime":"2026-01-26T07:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.214560 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.214938 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.215094 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.215243 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.215378 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:25Z","lastTransitionTime":"2026-01-26T07:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.319167 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.319617 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.320008 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.320411 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.320632 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:25Z","lastTransitionTime":"2026-01-26T07:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.424347 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.424403 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.424424 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.424449 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.424467 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:25Z","lastTransitionTime":"2026-01-26T07:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.528398 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.528796 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.528961 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.529148 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.529319 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:25Z","lastTransitionTime":"2026-01-26T07:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.632953 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.633036 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.633061 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.633095 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.633117 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:25Z","lastTransitionTime":"2026-01-26T07:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.736781 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.736860 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.736886 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.736926 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.736947 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:25Z","lastTransitionTime":"2026-01-26T07:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.840793 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.840867 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.840886 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.840910 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.840928 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:25Z","lastTransitionTime":"2026-01-26T07:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.944187 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.944597 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.944767 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.945009 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:25 crc kubenswrapper[4806]: I0126 07:54:25.945157 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:25Z","lastTransitionTime":"2026-01-26T07:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.041609 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:26 crc kubenswrapper[4806]: E0126 07:54:26.042190 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.041765 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.041840 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:26 crc kubenswrapper[4806]: E0126 07:54:26.042660 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.041765 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:26 crc kubenswrapper[4806]: E0126 07:54:26.043179 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:26 crc kubenswrapper[4806]: E0126 07:54:26.042874 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.048726 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.048976 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.049241 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.049480 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.049702 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:26Z","lastTransitionTime":"2026-01-26T07:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.051174 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 11:37:28.135731508 +0000 UTC Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.154157 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.154247 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.154276 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.154309 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.154332 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:26Z","lastTransitionTime":"2026-01-26T07:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.257068 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.257488 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.257685 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.257863 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.258020 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:26Z","lastTransitionTime":"2026-01-26T07:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.361772 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.362189 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.362320 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.362453 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.362610 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:26Z","lastTransitionTime":"2026-01-26T07:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.466640 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.466716 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.466739 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.466773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.466795 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:26Z","lastTransitionTime":"2026-01-26T07:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.571949 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.572012 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.572030 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.572061 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.572081 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:26Z","lastTransitionTime":"2026-01-26T07:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.675330 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.675407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.675649 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.675685 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.675707 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:26Z","lastTransitionTime":"2026-01-26T07:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.784817 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.784908 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.784945 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.784985 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.785017 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:26Z","lastTransitionTime":"2026-01-26T07:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.888670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.888734 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.888750 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.888775 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.888791 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:26Z","lastTransitionTime":"2026-01-26T07:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.992367 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.992448 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.992694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.992733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:26 crc kubenswrapper[4806]: I0126 07:54:26.992753 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:26Z","lastTransitionTime":"2026-01-26T07:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.052322 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 03:49:13.019160025 +0000 UTC Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.096553 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.096626 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.096647 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.096677 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.096695 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:27Z","lastTransitionTime":"2026-01-26T07:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.201076 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.201139 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.201157 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.201187 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.201209 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:27Z","lastTransitionTime":"2026-01-26T07:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.305060 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.305107 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.305123 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.305147 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.305161 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:27Z","lastTransitionTime":"2026-01-26T07:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.407093 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.407124 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.407133 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.407149 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.407162 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:27Z","lastTransitionTime":"2026-01-26T07:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.510288 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.510331 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.510343 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.510362 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.510396 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:27Z","lastTransitionTime":"2026-01-26T07:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.614345 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.614392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.614403 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.614423 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.614443 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:27Z","lastTransitionTime":"2026-01-26T07:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.717237 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.717283 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.717296 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.717315 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.717330 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:27Z","lastTransitionTime":"2026-01-26T07:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.821372 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.821424 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.821437 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.821458 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.821474 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:27Z","lastTransitionTime":"2026-01-26T07:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.925436 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.925956 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.926139 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.926339 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:27 crc kubenswrapper[4806]: I0126 07:54:27.926554 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:27Z","lastTransitionTime":"2026-01-26T07:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.029617 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.029728 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.029806 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.029891 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.029922 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:28Z","lastTransitionTime":"2026-01-26T07:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.041943 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.041952 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.041979 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.042057 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:28 crc kubenswrapper[4806]: E0126 07:54:28.042615 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:28 crc kubenswrapper[4806]: E0126 07:54:28.042713 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:28 crc kubenswrapper[4806]: E0126 07:54:28.042400 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:28 crc kubenswrapper[4806]: E0126 07:54:28.042409 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.052507 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 10:11:36.6585808 +0000 UTC Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.134172 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.134604 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.134806 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.134992 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.135189 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:28Z","lastTransitionTime":"2026-01-26T07:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.239881 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.239995 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.240018 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.240086 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.240154 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:28Z","lastTransitionTime":"2026-01-26T07:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.342985 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.343076 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.343086 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.343137 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.343151 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:28Z","lastTransitionTime":"2026-01-26T07:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.446192 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.446258 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.446278 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.446310 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.446330 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:28Z","lastTransitionTime":"2026-01-26T07:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.549656 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.549721 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.549740 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.549769 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.549793 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:28Z","lastTransitionTime":"2026-01-26T07:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.653687 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.653753 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.653775 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.653805 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.653826 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:28Z","lastTransitionTime":"2026-01-26T07:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.757039 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.757102 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.757117 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.757141 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.757156 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:28Z","lastTransitionTime":"2026-01-26T07:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.860755 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.860865 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.860896 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.860941 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.860967 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:28Z","lastTransitionTime":"2026-01-26T07:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.964841 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.964915 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.964938 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.964975 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:28 crc kubenswrapper[4806]: I0126 07:54:28.964996 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:28Z","lastTransitionTime":"2026-01-26T07:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.053618 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:18:08.728131302 +0000 UTC Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.068134 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.068193 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.068210 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.068236 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.068256 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.172101 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.172170 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.172187 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.172215 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.172233 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.276550 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.276607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.276626 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.276652 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.276670 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.380517 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.380890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.381025 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.381190 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.381329 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.485115 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.485178 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.485192 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.485212 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.485645 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.589091 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.589153 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.589171 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.589196 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.589213 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.692851 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.692919 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.692943 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.692980 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.693008 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.796121 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.796189 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.796225 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.796253 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.796272 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.881637 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.881697 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.881706 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.881729 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.881747 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: E0126 07:54:29.898101 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:29Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.903950 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.903997 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.904013 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.904034 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.904044 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: E0126 07:54:29.922514 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:29Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.927374 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.927660 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.927845 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.928045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.928248 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: E0126 07:54:29.951195 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:29Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.956554 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.956635 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.956646 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.956669 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.956686 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:29 crc kubenswrapper[4806]: E0126 07:54:29.976721 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:29Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.982798 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.982860 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.982883 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.982920 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:29 crc kubenswrapper[4806]: I0126 07:54:29.982945 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:29Z","lastTransitionTime":"2026-01-26T07:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:30 crc kubenswrapper[4806]: E0126 07:54:30.004422 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: E0126 07:54:30.004718 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.007195 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.007244 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.007265 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.007296 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.007317 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:30Z","lastTransitionTime":"2026-01-26T07:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.041952 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.042004 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.042040 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:30 crc kubenswrapper[4806]: E0126 07:54:30.042779 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.042155 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:30 crc kubenswrapper[4806]: E0126 07:54:30.043168 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:30 crc kubenswrapper[4806]: E0126 07:54:30.043292 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:30 crc kubenswrapper[4806]: E0126 07:54:30.043463 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.043731 4806 scope.go:117] "RemoveContainer" containerID="e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.054139 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:36:37.770751223 +0000 UTC Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.111506 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.111920 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.112111 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.112326 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.112615 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:30Z","lastTransitionTime":"2026-01-26T07:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.217063 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.217133 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.217155 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.217184 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.217205 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:30Z","lastTransitionTime":"2026-01-26T07:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.322147 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.322668 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.322689 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.322723 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.322745 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:30Z","lastTransitionTime":"2026-01-26T07:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.420629 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/1.log" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.424843 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.424901 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.424918 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.424946 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.424970 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:30Z","lastTransitionTime":"2026-01-26T07:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.426112 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3"} Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.426695 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.451003 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.470066 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.485692 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.506164 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.528388 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.528453 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.528465 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.528486 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.528500 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:30Z","lastTransitionTime":"2026-01-26T07:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.552398 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:
54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.591627 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.610347 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.624880 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.631089 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.631131 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.631140 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.631161 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.631173 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:30Z","lastTransitionTime":"2026-01-26T07:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.636640 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.650871 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.660533 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.660936 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:30 crc kubenswrapper[4806]: E0126 07:54:30.661163 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:30 crc kubenswrapper[4806]: E0126 07:54:30.661246 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs podName:137029f0-49ad-4400-b117-2eff9271bce3 nodeName:}" failed. No retries permitted until 2026-01-26 07:54:46.661222517 +0000 UTC m=+65.925630573 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs") pod "network-metrics-daemon-rqmvf" (UID: "137029f0-49ad-4400-b117-2eff9271bce3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.672215 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.685647 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.702399 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.724282 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42
930193e14dec2164c415acd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:13Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0126 07:54:13.077605 6146 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.077710 6146 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:13.080415 6146 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:13.081317 6146 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.082235 6146 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083087 6146 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083515 6146 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.734243 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.734299 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.734311 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.734333 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.734346 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:30Z","lastTransitionTime":"2026-01-26T07:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.740303 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.761615 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.837101 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.837133 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.837146 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.837164 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.837177 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:30Z","lastTransitionTime":"2026-01-26T07:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.939645 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.939690 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.939700 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.939718 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:30 crc kubenswrapper[4806]: I0126 07:54:30.939732 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:30Z","lastTransitionTime":"2026-01-26T07:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.042765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.042812 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.042821 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.042844 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.042854 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:31Z","lastTransitionTime":"2026-01-26T07:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.055329 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:32:45.101756155 +0000 UTC Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.064685 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.082231 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.103388 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.125596 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.146680 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.146747 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.146771 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.147181 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.147215 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:31Z","lastTransitionTime":"2026-01-26T07:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.147492 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"i
p\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.166186 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799
488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.185634 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.202152 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.226387 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42
930193e14dec2164c415acd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:13Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0126 07:54:13.077605 6146 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.077710 6146 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:13.080415 6146 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:13.081317 6146 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.082235 6146 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083087 6146 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083515 6146 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.238119 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.250188 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.250242 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.250259 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.250282 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.250297 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:31Z","lastTransitionTime":"2026-01-26T07:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.270805 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.287684 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.303544 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.321453 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.335366 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.349854 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.352685 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.352737 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.352750 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.352817 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.352833 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:31Z","lastTransitionTime":"2026-01-26T07:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.365055 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:
54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.434333 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/2.log" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.436109 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/1.log" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.440510 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerID="cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3" exitCode=1 Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.440651 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3"} Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.440722 4806 scope.go:117] "RemoveContainer" containerID="e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.442265 4806 scope.go:117] "RemoveContainer" containerID="cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3" Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.442733 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.456708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.456831 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.456934 4806 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.457013 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.457073 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:31Z","lastTransitionTime":"2026-01-26T07:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.467396 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":
0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.506313 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c
85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.533927 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.552180 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.561483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.561801 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.561890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.561980 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.562076 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:31Z","lastTransitionTime":"2026-01-26T07:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.566505 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.578255 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.593381 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.606769 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.621262 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.633579 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.646827 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.660388 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 
07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.664825 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.664870 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.664885 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.664905 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.664918 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:31Z","lastTransitionTime":"2026-01-26T07:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.675140 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.688743 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.699954 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.720639 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42
930193e14dec2164c415acd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7b156e8395962880407d0db1c1b809b804ce946f09b03d256e2d49bbeea75f1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:13Z\\\",\\\"message\\\":\\\"ernalversions/factory.go:140\\\\nI0126 07:54:13.077605 6146 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.077710 6146 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 07:54:13.080415 6146 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0126 07:54:13.081317 6146 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.082235 6146 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083087 6146 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0126 07:54:13.083515 6146 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:31Z\\\",\\\"message\\\":\\\"o create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z]\\\\nI0126 07:54:31.006399 6338 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-console/networking-console-plugin_TCP_cluster\\\\\\\", UUID:\\\\\\\"ab0b1d51-5ec6-479b-8881-93dfa8d30337\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", 
\\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.738574 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:31Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.768375 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.768432 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.768446 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.768469 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.768485 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:31Z","lastTransitionTime":"2026-01-26T07:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.871707 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.871737 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.871746 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.871760 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.871770 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:31Z","lastTransitionTime":"2026-01-26T07:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.873734 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.873988 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:03.873971531 +0000 UTC m=+83.138379587 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.975011 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.975457 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.975498 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.975508 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.975539 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.975550 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:31Z","lastTransitionTime":"2026-01-26T07:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.975233 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.976121 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.976177 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:55:03.976149249 +0000 UTC m=+83.240557305 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.976473 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:31 crc kubenswrapper[4806]: I0126 07:54:31.976726 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.976260 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.977107 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.977263 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.977463 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 07:55:03.977435347 +0000 UTC m=+83.241843443 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.976654 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.976779 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.978000 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:55:03.977985993 +0000 UTC m=+83.242394049 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.979669 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.979714 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:31 crc kubenswrapper[4806]: E0126 07:54:31.979766 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 07:55:03.979752135 +0000 UTC m=+83.244160401 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.041859 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.041970 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.042047 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:32 crc kubenswrapper[4806]: E0126 07:54:32.042104 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.041983 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:32 crc kubenswrapper[4806]: E0126 07:54:32.042058 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:32 crc kubenswrapper[4806]: E0126 07:54:32.042307 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:32 crc kubenswrapper[4806]: E0126 07:54:32.042357 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.056609 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:48:36.696301496 +0000 UTC Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.078387 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.078443 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.078455 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.078477 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.078493 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:32Z","lastTransitionTime":"2026-01-26T07:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.180933 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.181053 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.181073 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.181093 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.181108 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:32Z","lastTransitionTime":"2026-01-26T07:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.284851 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.284921 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.284940 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.284969 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.285023 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:32Z","lastTransitionTime":"2026-01-26T07:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.388402 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.388447 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.388460 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.388478 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.388488 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:32Z","lastTransitionTime":"2026-01-26T07:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.447580 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/2.log" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.452450 4806 scope.go:117] "RemoveContainer" containerID="cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3" Jan 26 07:54:32 crc kubenswrapper[4806]: E0126 07:54:32.452646 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.484809 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42
930193e14dec2164c415acd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:31Z\\\",\\\"message\\\":\\\"o create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z]\\\\nI0126 07:54:31.006399 6338 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-console/networking-console-plugin_TCP_cluster\\\\\\\", UUID:\\\\\\\"ab0b1d51-5ec6-479b-8881-93dfa8d30337\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.491053 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.491096 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.491106 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.491122 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.491138 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:32Z","lastTransitionTime":"2026-01-26T07:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.500773 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.519360 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.544585 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.559947 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.576715 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.594301 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.594357 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.594371 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.594390 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.594402 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:32Z","lastTransitionTime":"2026-01-26T07:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.596992 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.622972 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.644821 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.661161 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad
206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.678765 4806 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.691613 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.698126 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.698273 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.698293 4806 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.698353 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.698375 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:32Z","lastTransitionTime":"2026-01-26T07:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.705041 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.726873 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.747771 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.767501 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.781992 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 
07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.802062 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.802105 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.802115 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.802133 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.802146 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:32Z","lastTransitionTime":"2026-01-26T07:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.807346 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.827898 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.835100 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.849704 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name
\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.874509 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.895319 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.904777 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.904978 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.905103 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.905219 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.905310 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:32Z","lastTransitionTime":"2026-01-26T07:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.930206 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:31Z\\\",\\\"message\\\":\\\"o create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z]\\\\nI0126 07:54:31.006399 6338 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-console/networking-console-plugin_TCP_cluster\\\\\\\", UUID:\\\\\\\"ab0b1d51-5ec6-479b-8881-93dfa8d30337\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.952111 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.971246 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.984983 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:32 crc kubenswrapper[4806]: I0126 07:54:32.998984 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:32Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.007885 4806 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.007934 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.007952 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.007978 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.007996 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:33Z","lastTransitionTime":"2026-01-26T07:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.013129 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:33Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.031097 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:33Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.085612 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 15:44:46.257804666 +0000 UTC Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.085925 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68e
fae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secret
s/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:33Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.110781 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib
/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:33Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.111711 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.111747 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.111757 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.111790 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.111802 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:33Z","lastTransitionTime":"2026-01-26T07:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.132446 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:33Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.151846 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:33Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.168150 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:33Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.183553 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:33Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.215433 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.215577 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.215607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.215642 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.215673 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:33Z","lastTransitionTime":"2026-01-26T07:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.319134 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.319708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.319925 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.320092 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.320243 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:33Z","lastTransitionTime":"2026-01-26T07:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.422982 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.423253 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.423351 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.423457 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.423568 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:33Z","lastTransitionTime":"2026-01-26T07:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.526049 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.526090 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.526099 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.526112 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.526121 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:33Z","lastTransitionTime":"2026-01-26T07:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.628266 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.628305 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.628313 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.628326 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.628336 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:33Z","lastTransitionTime":"2026-01-26T07:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.730371 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.730405 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.730422 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.730442 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.730454 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:33Z","lastTransitionTime":"2026-01-26T07:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.832788 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.833181 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.833258 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.833328 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.833397 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:33Z","lastTransitionTime":"2026-01-26T07:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.935713 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.935776 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.935786 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.935800 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:33 crc kubenswrapper[4806]: I0126 07:54:33.935810 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:33Z","lastTransitionTime":"2026-01-26T07:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.038032 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.038326 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.038416 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.038487 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.038583 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:34Z","lastTransitionTime":"2026-01-26T07:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.041249 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.041296 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.041352 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:34 crc kubenswrapper[4806]: E0126 07:54:34.041474 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.041546 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:34 crc kubenswrapper[4806]: E0126 07:54:34.041607 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:34 crc kubenswrapper[4806]: E0126 07:54:34.041752 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:34 crc kubenswrapper[4806]: E0126 07:54:34.041804 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.086827 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:58:44.28439897 +0000 UTC Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.141155 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.141198 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.141209 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.141230 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.141243 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:34Z","lastTransitionTime":"2026-01-26T07:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.244186 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.244253 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.244264 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.244279 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.244307 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:34Z","lastTransitionTime":"2026-01-26T07:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.345938 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.345982 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.345991 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.346005 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.346014 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:34Z","lastTransitionTime":"2026-01-26T07:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.448610 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.448663 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.448672 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.448686 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.448695 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:34Z","lastTransitionTime":"2026-01-26T07:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.551973 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.552043 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.552063 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.552085 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.552116 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:34Z","lastTransitionTime":"2026-01-26T07:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.654954 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.655012 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.655024 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.655044 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.655057 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:34Z","lastTransitionTime":"2026-01-26T07:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.757203 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.757297 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.757324 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.757353 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.757390 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:34Z","lastTransitionTime":"2026-01-26T07:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.860142 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.860172 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.860181 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.860194 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.860203 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:34Z","lastTransitionTime":"2026-01-26T07:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.963663 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.963734 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.963751 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.964206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:34 crc kubenswrapper[4806]: I0126 07:54:34.964251 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:34Z","lastTransitionTime":"2026-01-26T07:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.068195 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.068292 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.068311 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.068331 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.068346 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:35Z","lastTransitionTime":"2026-01-26T07:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.088426 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 00:20:57.803149903 +0000 UTC Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.171286 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.171337 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.171354 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.171380 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.171396 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:35Z","lastTransitionTime":"2026-01-26T07:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.274395 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.274458 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.274475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.274501 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.274550 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:35Z","lastTransitionTime":"2026-01-26T07:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.377314 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.377357 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.377368 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.377429 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.377444 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:35Z","lastTransitionTime":"2026-01-26T07:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.480653 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.480717 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.480734 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.480760 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.480778 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:35Z","lastTransitionTime":"2026-01-26T07:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.583610 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.583643 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.583653 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.583698 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.583707 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:35Z","lastTransitionTime":"2026-01-26T07:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.686063 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.686101 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.686113 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.686130 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.686141 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:35Z","lastTransitionTime":"2026-01-26T07:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.788390 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.788517 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.788570 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.788595 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.788614 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:35Z","lastTransitionTime":"2026-01-26T07:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.891866 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.891923 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.891940 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.891955 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.891994 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:35Z","lastTransitionTime":"2026-01-26T07:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.994251 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.994303 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.994319 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.994342 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:35 crc kubenswrapper[4806]: I0126 07:54:35.994359 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:35Z","lastTransitionTime":"2026-01-26T07:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.041393 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.041506 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:36 crc kubenswrapper[4806]: E0126 07:54:36.041543 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.041579 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:36 crc kubenswrapper[4806]: E0126 07:54:36.041815 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:36 crc kubenswrapper[4806]: E0126 07:54:36.041907 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.041413 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:36 crc kubenswrapper[4806]: E0126 07:54:36.042440 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.088878 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:46:24.313219031 +0000 UTC Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.097404 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.097458 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.097474 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.097501 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.097545 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:36Z","lastTransitionTime":"2026-01-26T07:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.202192 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.202275 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.202455 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.202496 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.202551 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:36Z","lastTransitionTime":"2026-01-26T07:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.305973 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.306026 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.306043 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.306061 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.306076 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:36Z","lastTransitionTime":"2026-01-26T07:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.409118 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.409198 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.409217 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.409310 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.409329 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:36Z","lastTransitionTime":"2026-01-26T07:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.512462 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.512504 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.512513 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.512544 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.512555 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:36Z","lastTransitionTime":"2026-01-26T07:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.614584 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.614649 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.614674 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.614701 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.614722 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:36Z","lastTransitionTime":"2026-01-26T07:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.774984 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.775013 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.775022 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.775035 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.775045 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:36Z","lastTransitionTime":"2026-01-26T07:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.877158 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.877188 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.877197 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.877210 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.877219 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:36Z","lastTransitionTime":"2026-01-26T07:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.978764 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.978802 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.978811 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.978828 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:36 crc kubenswrapper[4806]: I0126 07:54:36.978838 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:36Z","lastTransitionTime":"2026-01-26T07:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.081621 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.081667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.081680 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.081700 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.081711 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:37Z","lastTransitionTime":"2026-01-26T07:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.089765 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:50:15.352612879 +0000 UTC Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.184655 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.184733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.184744 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.184762 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.184773 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:37Z","lastTransitionTime":"2026-01-26T07:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.287397 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.287452 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.287474 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.287497 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.287514 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:37Z","lastTransitionTime":"2026-01-26T07:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.390256 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.390332 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.390356 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.390387 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.390409 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:37Z","lastTransitionTime":"2026-01-26T07:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.492814 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.493100 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.493191 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.493275 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.493355 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:37Z","lastTransitionTime":"2026-01-26T07:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.596217 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.596495 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.596717 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.596840 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.596963 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:37Z","lastTransitionTime":"2026-01-26T07:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.699799 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.700268 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.700556 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.700835 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.701018 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:37Z","lastTransitionTime":"2026-01-26T07:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.804463 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.804572 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.804632 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.804667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.804699 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:37Z","lastTransitionTime":"2026-01-26T07:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.908883 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.909206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.909310 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.909425 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:37 crc kubenswrapper[4806]: I0126 07:54:37.909606 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:37Z","lastTransitionTime":"2026-01-26T07:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.012689 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.013098 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.013201 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.013388 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.013563 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:38Z","lastTransitionTime":"2026-01-26T07:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.041204 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:38 crc kubenswrapper[4806]: E0126 07:54:38.041407 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.041790 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:38 crc kubenswrapper[4806]: E0126 07:54:38.042160 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.041822 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.041233 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:38 crc kubenswrapper[4806]: E0126 07:54:38.042722 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:38 crc kubenswrapper[4806]: E0126 07:54:38.043480 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.090942 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 05:00:26.557970009 +0000 UTC Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.116311 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.116673 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.116923 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.117079 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.117200 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:38Z","lastTransitionTime":"2026-01-26T07:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.220571 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.220626 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.220640 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.220665 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.220682 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:38Z","lastTransitionTime":"2026-01-26T07:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.324141 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.324195 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.324212 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.324235 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.324252 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:38Z","lastTransitionTime":"2026-01-26T07:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.426841 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.427289 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.427464 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.427681 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.427869 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:38Z","lastTransitionTime":"2026-01-26T07:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.531548 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.531636 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.531656 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.531685 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.531704 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:38Z","lastTransitionTime":"2026-01-26T07:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.635607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.635663 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.635680 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.635707 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.635724 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:38Z","lastTransitionTime":"2026-01-26T07:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.740250 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.740346 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.740359 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.740379 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.740398 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:38Z","lastTransitionTime":"2026-01-26T07:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.844163 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.844220 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.844230 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.844250 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.844261 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:38Z","lastTransitionTime":"2026-01-26T07:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.948099 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.948193 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.948241 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.948274 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:38 crc kubenswrapper[4806]: I0126 07:54:38.948291 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:38Z","lastTransitionTime":"2026-01-26T07:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.050711 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.050786 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.050806 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.050915 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.050935 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:39Z","lastTransitionTime":"2026-01-26T07:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.091928 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 02:32:37.090581941 +0000 UTC Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.154178 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.154232 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.154252 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.154276 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.154294 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:39Z","lastTransitionTime":"2026-01-26T07:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.257717 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.258113 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.258266 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.258375 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.258476 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:39Z","lastTransitionTime":"2026-01-26T07:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.361970 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.362059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.362080 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.362104 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.362120 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:39Z","lastTransitionTime":"2026-01-26T07:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.465658 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.465716 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.465731 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.465755 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.465771 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:39Z","lastTransitionTime":"2026-01-26T07:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.569904 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.570019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.570032 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.570052 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.570064 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:39Z","lastTransitionTime":"2026-01-26T07:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.673236 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.673278 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.673290 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.673309 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.673321 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:39Z","lastTransitionTime":"2026-01-26T07:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.777205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.777278 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.777298 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.777330 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.777351 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:39Z","lastTransitionTime":"2026-01-26T07:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.881309 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.881364 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.881378 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.881400 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.881414 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:39Z","lastTransitionTime":"2026-01-26T07:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.984244 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.984301 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.984313 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.984335 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:39 crc kubenswrapper[4806]: I0126 07:54:39.984346 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:39Z","lastTransitionTime":"2026-01-26T07:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.042029 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.042094 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.042056 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.042029 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:40 crc kubenswrapper[4806]: E0126 07:54:40.042248 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:40 crc kubenswrapper[4806]: E0126 07:54:40.042404 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:40 crc kubenswrapper[4806]: E0126 07:54:40.042540 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:40 crc kubenswrapper[4806]: E0126 07:54:40.042851 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.088301 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.088383 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.088407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.088446 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.088474 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.092564 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:02:52.435055981 +0000 UTC Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.191956 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.192026 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.192048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.192077 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.192096 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.294929 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.295029 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.295059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.295103 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.295130 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.329215 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.329295 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.329316 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.329345 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.329365 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: E0126 07:54:40.354625 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:40Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.359933 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.360011 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.360033 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.360065 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.360084 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: E0126 07:54:40.378766 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:40Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.384527 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.384613 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.384633 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.384661 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.384682 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: E0126 07:54:40.403209 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:40Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.409944 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.410009 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.410028 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.410054 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.410070 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: E0126 07:54:40.431328 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:40Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.436771 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.436896 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.436977 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.437048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.437105 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: E0126 07:54:40.458835 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:40Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:40 crc kubenswrapper[4806]: E0126 07:54:40.459105 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.461730 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.461879 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.461980 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.462071 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.462216 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.565581 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.565925 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.566011 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.566107 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.566196 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.669803 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.669851 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.669867 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.669888 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.669905 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.773417 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.773469 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.773481 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.773500 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.773514 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.878123 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.878178 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.878187 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.878209 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.878220 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.981662 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.981730 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.981750 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.981789 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:40 crc kubenswrapper[4806]: I0126 07:54:40.981812 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:40Z","lastTransitionTime":"2026-01-26T07:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.060136 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2d0753-513a-4924-9985-d1058d2cda9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da291d9c45edfc19eed33fce385cb1d8005585d2e9ac5fdfc50040a9c3038683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6719f7820924ce02c29e99fc9fb5c6e733597c1f006b3a2701046680af978e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e90cbd21cd7080a82e2440f441264b1fd1b45ee048fbcadc7afbd7b4384996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.079703 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.085611 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.085693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.085714 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.085739 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.085755 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:41Z","lastTransitionTime":"2026-01-26T07:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.092770 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:58:52.053453302 +0000 UTC Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.094902 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.113020 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.125415 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.140446 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 
07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.155662 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.169435 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.184510 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.187598 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.187628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.187636 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.187652 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.187662 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:41Z","lastTransitionTime":"2026-01-26T07:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.210263 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:31Z\\\",\\\"message\\\":\\\"o create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z]\\\\nI0126 07:54:31.006399 6338 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-console/networking-console-plugin_TCP_cluster\\\\\\\", UUID:\\\\\\\"ab0b1d51-5ec6-479b-8881-93dfa8d30337\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.222179 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.250362 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.272619 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad
206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.287283 4806 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.289856 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.289902 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.289917 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.289941 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.289956 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:41Z","lastTransitionTime":"2026-01-26T07:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.306224 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.319958 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.336053 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.358347 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T07:54:41Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.391949 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.391978 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.391989 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.392003 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.392012 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:41Z","lastTransitionTime":"2026-01-26T07:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.493404 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.493433 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.493488 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.493502 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.493544 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:41Z","lastTransitionTime":"2026-01-26T07:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.595632 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.595691 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.595704 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.595727 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.595745 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:41Z","lastTransitionTime":"2026-01-26T07:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.698332 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.698388 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.698401 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.698419 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.698431 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:41Z","lastTransitionTime":"2026-01-26T07:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.801256 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.801309 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.801360 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.801374 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.801383 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:41Z","lastTransitionTime":"2026-01-26T07:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.904045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.904093 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.904106 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.904129 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:41 crc kubenswrapper[4806]: I0126 07:54:41.904143 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:41Z","lastTransitionTime":"2026-01-26T07:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.008360 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.008407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.008418 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.008438 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.008451 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:42Z","lastTransitionTime":"2026-01-26T07:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.041435 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.041564 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.041622 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.041461 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:42 crc kubenswrapper[4806]: E0126 07:54:42.041662 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:42 crc kubenswrapper[4806]: E0126 07:54:42.041768 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:42 crc kubenswrapper[4806]: E0126 07:54:42.042049 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:42 crc kubenswrapper[4806]: E0126 07:54:42.042124 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.093719 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:13:35.592636912 +0000 UTC Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.111721 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.111773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.111784 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.111802 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.111824 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:42Z","lastTransitionTime":"2026-01-26T07:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.213932 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.213991 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.214010 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.214030 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.214046 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:42Z","lastTransitionTime":"2026-01-26T07:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.316232 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.316295 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.316310 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.316337 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.316351 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:42Z","lastTransitionTime":"2026-01-26T07:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.419425 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.419476 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.419490 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.419510 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.419525 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:42Z","lastTransitionTime":"2026-01-26T07:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.522918 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.522968 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.522978 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.522994 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.523005 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:42Z","lastTransitionTime":"2026-01-26T07:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.625688 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.625754 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.625773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.625791 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.625802 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:42Z","lastTransitionTime":"2026-01-26T07:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.729481 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.729582 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.729600 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.729628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.729647 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:42Z","lastTransitionTime":"2026-01-26T07:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.832370 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.832422 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.832434 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.832451 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.832466 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:42Z","lastTransitionTime":"2026-01-26T07:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.935730 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.935784 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.935798 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.935815 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:42 crc kubenswrapper[4806]: I0126 07:54:42.935826 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:42Z","lastTransitionTime":"2026-01-26T07:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.038426 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.038466 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.038475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.038489 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.038498 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:43Z","lastTransitionTime":"2026-01-26T07:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.094963 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:04:16.623295732 +0000 UTC Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.142500 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.142594 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.142609 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.142636 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.142650 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:43Z","lastTransitionTime":"2026-01-26T07:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.246117 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.246214 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.246228 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.246250 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.246264 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:43Z","lastTransitionTime":"2026-01-26T07:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.348599 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.348648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.348661 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.348678 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.348690 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:43Z","lastTransitionTime":"2026-01-26T07:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.450791 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.450862 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.450877 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.450895 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.450906 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:43Z","lastTransitionTime":"2026-01-26T07:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.554050 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.554090 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.554100 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.554114 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.554308 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:43Z","lastTransitionTime":"2026-01-26T07:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.657919 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.657968 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.657981 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.657999 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.658011 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:43Z","lastTransitionTime":"2026-01-26T07:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.761429 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.761555 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.761583 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.761616 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.761640 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:43Z","lastTransitionTime":"2026-01-26T07:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.864363 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.864408 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.864425 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.864445 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.864459 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:43Z","lastTransitionTime":"2026-01-26T07:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.967864 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.967903 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.967915 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.967931 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:43 crc kubenswrapper[4806]: I0126 07:54:43.967942 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:43Z","lastTransitionTime":"2026-01-26T07:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.041025 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:44 crc kubenswrapper[4806]: E0126 07:54:44.041172 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.041613 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:44 crc kubenswrapper[4806]: E0126 07:54:44.041681 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.041757 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:44 crc kubenswrapper[4806]: E0126 07:54:44.042367 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.042472 4806 scope.go:117] "RemoveContainer" containerID="cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.042585 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:44 crc kubenswrapper[4806]: E0126 07:54:44.042687 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:44 crc kubenswrapper[4806]: E0126 07:54:44.042757 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.070906 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.070947 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.070957 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.070976 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.070987 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:44Z","lastTransitionTime":"2026-01-26T07:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.095981 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 11:58:39.971129028 +0000 UTC Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.173360 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.173406 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.173419 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.173437 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.173449 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:44Z","lastTransitionTime":"2026-01-26T07:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.276899 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.276947 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.276957 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.276973 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.276984 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:44Z","lastTransitionTime":"2026-01-26T07:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.379416 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.379460 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.379472 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.379490 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.379500 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:44Z","lastTransitionTime":"2026-01-26T07:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.481729 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.481766 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.481778 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.481793 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.481806 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:44Z","lastTransitionTime":"2026-01-26T07:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.584589 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.584639 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.584650 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.584665 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.584677 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:44Z","lastTransitionTime":"2026-01-26T07:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.686911 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.686951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.686960 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.686973 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.686982 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:44Z","lastTransitionTime":"2026-01-26T07:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.788770 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.788821 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.788830 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.788845 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.788857 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:44Z","lastTransitionTime":"2026-01-26T07:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.890667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.890704 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.890712 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.890728 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.890738 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:44Z","lastTransitionTime":"2026-01-26T07:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.992602 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.992647 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.992657 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.992678 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:44 crc kubenswrapper[4806]: I0126 07:54:44.992697 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:44Z","lastTransitionTime":"2026-01-26T07:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.094563 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.094597 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.094618 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.094632 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.094641 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:45Z","lastTransitionTime":"2026-01-26T07:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.096878 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 10:48:44.740600572 +0000 UTC Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.196325 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.196369 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.196378 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.196394 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.196403 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:45Z","lastTransitionTime":"2026-01-26T07:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.298844 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.298895 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.298914 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.298936 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.298948 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:45Z","lastTransitionTime":"2026-01-26T07:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.400851 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.400912 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.400926 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.400953 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.400966 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:45Z","lastTransitionTime":"2026-01-26T07:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.505252 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.505333 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.505342 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.505358 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.505368 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:45Z","lastTransitionTime":"2026-01-26T07:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.607690 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.607734 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.607742 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.607757 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.607766 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:45Z","lastTransitionTime":"2026-01-26T07:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.722252 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.722319 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.722333 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.722347 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.722356 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:45Z","lastTransitionTime":"2026-01-26T07:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.824791 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.824824 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.824834 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.824859 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.824877 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:45Z","lastTransitionTime":"2026-01-26T07:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.927477 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.927514 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.927540 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.927555 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:45 crc kubenswrapper[4806]: I0126 07:54:45.927568 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:45Z","lastTransitionTime":"2026-01-26T07:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.030289 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.030342 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.030360 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.030382 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.030400 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:46Z","lastTransitionTime":"2026-01-26T07:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.074939 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.075015 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:46 crc kubenswrapper[4806]: E0126 07:54:46.075064 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.075098 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.075208 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:46 crc kubenswrapper[4806]: E0126 07:54:46.075212 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:46 crc kubenswrapper[4806]: E0126 07:54:46.075291 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:46 crc kubenswrapper[4806]: E0126 07:54:46.075369 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.097715 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:08:25.491763785 +0000 UTC Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.132761 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.132796 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.132804 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.132819 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.132829 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:46Z","lastTransitionTime":"2026-01-26T07:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.234840 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.234876 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.234885 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.234900 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.234909 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:46Z","lastTransitionTime":"2026-01-26T07:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.337628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.337684 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.337696 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.337714 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.337726 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:46Z","lastTransitionTime":"2026-01-26T07:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.439598 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.439635 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.439645 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.439688 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.439701 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:46Z","lastTransitionTime":"2026-01-26T07:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.541763 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.541810 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.541822 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.541841 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.541854 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:46Z","lastTransitionTime":"2026-01-26T07:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.644021 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.644075 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.644083 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.644106 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.644117 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:46Z","lastTransitionTime":"2026-01-26T07:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.681688 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:46 crc kubenswrapper[4806]: E0126 07:54:46.681876 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:46 crc kubenswrapper[4806]: E0126 07:54:46.681961 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs podName:137029f0-49ad-4400-b117-2eff9271bce3 nodeName:}" failed. No retries permitted until 2026-01-26 07:55:18.681942282 +0000 UTC m=+97.946350338 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs") pod "network-metrics-daemon-rqmvf" (UID: "137029f0-49ad-4400-b117-2eff9271bce3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.746482 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.746601 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.746626 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.746653 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.746676 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:46Z","lastTransitionTime":"2026-01-26T07:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.849345 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.849472 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.849493 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.849517 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.849569 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:46Z","lastTransitionTime":"2026-01-26T07:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.951919 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.951975 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.951983 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.951999 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:46 crc kubenswrapper[4806]: I0126 07:54:46.952010 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:46Z","lastTransitionTime":"2026-01-26T07:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.054329 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.054375 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.054387 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.054404 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.054417 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:47Z","lastTransitionTime":"2026-01-26T07:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.098664 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:15:09.91274724 +0000 UTC Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.157169 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.157214 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.157225 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.157247 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.157584 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:47Z","lastTransitionTime":"2026-01-26T07:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.259819 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.259849 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.259857 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.259869 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.259878 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:47Z","lastTransitionTime":"2026-01-26T07:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.361662 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.361698 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.361708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.361723 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.361735 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:47Z","lastTransitionTime":"2026-01-26T07:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.464387 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.464439 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.464452 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.464473 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.464487 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:47Z","lastTransitionTime":"2026-01-26T07:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.513176 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7glh_4320ae6b-0d73-47d7-9f2c-f3c5b6b69041/kube-multus/0.log" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.513223 4806 generic.go:334] "Generic (PLEG): container finished" podID="4320ae6b-0d73-47d7-9f2c-f3c5b6b69041" containerID="9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551" exitCode=1 Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.513264 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7glh" event={"ID":"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041","Type":"ContainerDied","Data":"9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551"} Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.513761 4806 scope.go:117] "RemoveContainer" containerID="9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.537151 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42
930193e14dec2164c415acd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:31Z\\\",\\\"message\\\":\\\"o create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z]\\\\nI0126 07:54:31.006399 6338 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-console/networking-console-plugin_TCP_cluster\\\\\\\", UUID:\\\\\\\"ab0b1d51-5ec6-479b-8881-93dfa8d30337\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.547620 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.557703 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.567781 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.567815 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.567825 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.567841 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.567849 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:47Z","lastTransitionTime":"2026-01-26T07:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.568235 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.580125 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.589880 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.602406 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:47Z\\\",\\\"message\\\":\\\"2026-01-26T07:54:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59\\\\n2026-01-26T07:54:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59 to /host/opt/cni/bin/\\\\n2026-01-26T07:54:02Z [verbose] multus-daemon started\\\\n2026-01-26T07:54:02Z [verbose] Readiness Indicator file check\\\\n2026-01-26T07:54:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.616933 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.636678 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.655968 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad
206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.667710 4806 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.670579 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.670622 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.670633 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.670651 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.670662 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:47Z","lastTransitionTime":"2026-01-26T07:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.682833 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.694118 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2d0753-513a-4924-9985-d1058d2cda9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da291d9c45edfc19eed33fce385cb1d8005585d2e9ac5fdfc50040a9c3038683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6719f7820924ce02c29e99fc9fb5c6e733597c1f006b3a2701046680af978e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e90cbd21cd7080a82e2440f441264b1fd1b45ee048fbcadc7afbd7b4384996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.709832 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.721721 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.733991 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.743426 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.755235 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:47Z is after 2025-08-24T17:21:41Z" Jan 26 
07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.773325 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.773362 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.773373 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.773391 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.773403 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:47Z","lastTransitionTime":"2026-01-26T07:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.876067 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.876110 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.876147 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.876167 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.876180 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:47Z","lastTransitionTime":"2026-01-26T07:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.978178 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.978508 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.979311 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.979387 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:47 crc kubenswrapper[4806]: I0126 07:54:47.979447 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:47Z","lastTransitionTime":"2026-01-26T07:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.041846 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.041846 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:48 crc kubenswrapper[4806]: E0126 07:54:48.042003 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.041870 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:48 crc kubenswrapper[4806]: E0126 07:54:48.042123 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:48 crc kubenswrapper[4806]: E0126 07:54:48.042203 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.042605 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:48 crc kubenswrapper[4806]: E0126 07:54:48.042895 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.081398 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.081774 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.081837 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.081922 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.081996 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:48Z","lastTransitionTime":"2026-01-26T07:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.099828 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:27:28.679782855 +0000 UTC Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.184828 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.184868 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.184878 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.184892 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.184902 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:48Z","lastTransitionTime":"2026-01-26T07:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.286903 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.286944 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.286957 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.286974 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.286987 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:48Z","lastTransitionTime":"2026-01-26T07:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.389269 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.389302 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.389314 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.389331 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.389343 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:48Z","lastTransitionTime":"2026-01-26T07:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.491580 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.491624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.491637 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.491654 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.491665 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:48Z","lastTransitionTime":"2026-01-26T07:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.517145 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7glh_4320ae6b-0d73-47d7-9f2c-f3c5b6b69041/kube-multus/0.log" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.517195 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7glh" event={"ID":"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041","Type":"ContainerStarted","Data":"0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd"} Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.527803 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.537814 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 
07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.547814 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.559501 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.570374 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.586451 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42
930193e14dec2164c415acd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:31Z\\\",\\\"message\\\":\\\"o create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z]\\\\nI0126 07:54:31.006399 6338 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-console/networking-console-plugin_TCP_cluster\\\\\\\", UUID:\\\\\\\"ab0b1d51-5ec6-479b-8881-93dfa8d30337\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.594044 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.594077 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.594085 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.594098 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.594108 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:48Z","lastTransitionTime":"2026-01-26T07:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.597775 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.615005 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.628859 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad
206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.644158 4806 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.655779 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.666441 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.677136 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:47Z\\\",\\\"message\\\":\\\"2026-01-26T07:54:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59\\\\n2026-01-26T07:54:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59 to /host/opt/cni/bin/\\\\n2026-01-26T07:54:02Z [verbose] multus-daemon started\\\\n2026-01-26T07:54:02Z [verbose] Readiness Indicator file check\\\\n2026-01-26T07:54:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.689469 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.695781 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.695812 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:48 crc 
kubenswrapper[4806]: I0126 07:54:48.695823 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.695840 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.695852 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:48Z","lastTransitionTime":"2026-01-26T07:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.699856 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2d0753-513a-4924-9985-d1058d2cda9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da291d9c45edfc19eed33fce385cb1d8005585d2e9ac5fdfc50040a9c3038683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6719f7820924ce02c29e99fc9fb5c6e733597c1f006b3a2701046680af978e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://a8e90cbd21cd7080a82e2440f441264b1fd1b45ee048fbcadc7afbd7b4384996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.709809 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.720074 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.731035 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:48Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.798430 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.798463 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.798474 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.798488 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.798496 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:48Z","lastTransitionTime":"2026-01-26T07:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.900488 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.900522 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.900552 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.900564 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:48 crc kubenswrapper[4806]: I0126 07:54:48.900573 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:48Z","lastTransitionTime":"2026-01-26T07:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.002732 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.002760 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.002776 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.002791 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.002800 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:49Z","lastTransitionTime":"2026-01-26T07:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.101153 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 14:36:14.130421027 +0000 UTC Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.105362 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.105398 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.105406 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.105420 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.105431 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:49Z","lastTransitionTime":"2026-01-26T07:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.207926 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.207970 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.207979 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.207995 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.208005 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:49Z","lastTransitionTime":"2026-01-26T07:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.310005 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.310055 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.310065 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.310081 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.310092 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:49Z","lastTransitionTime":"2026-01-26T07:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.412417 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.412475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.412488 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.412506 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.412517 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:49Z","lastTransitionTime":"2026-01-26T07:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.514199 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.514237 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.514249 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.514264 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.514274 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:49Z","lastTransitionTime":"2026-01-26T07:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.616532 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.616573 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.616583 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.616600 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.616610 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:49Z","lastTransitionTime":"2026-01-26T07:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.719923 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.719976 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.719984 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.720000 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.720010 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:49Z","lastTransitionTime":"2026-01-26T07:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.821741 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.821786 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.821796 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.821814 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.821825 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:49Z","lastTransitionTime":"2026-01-26T07:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.923503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.923591 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.923606 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.923651 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:49 crc kubenswrapper[4806]: I0126 07:54:49.923665 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:49Z","lastTransitionTime":"2026-01-26T07:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.025545 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.025599 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.025612 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.025633 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.025644 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.041257 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.041292 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.041354 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.041280 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:50 crc kubenswrapper[4806]: E0126 07:54:50.041405 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:50 crc kubenswrapper[4806]: E0126 07:54:50.041741 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:50 crc kubenswrapper[4806]: E0126 07:54:50.041815 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:50 crc kubenswrapper[4806]: E0126 07:54:50.041904 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.051388 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.101829 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 17:43:39.128716679 +0000 UTC Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.127826 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.127885 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.127901 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.127928 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.127941 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.231399 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.231453 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.231465 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.231483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.231495 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.334356 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.334411 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.334427 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.334451 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.334468 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.437154 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.437205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.437218 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.437236 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.437251 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.539649 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.539693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.539702 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.539718 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.539727 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.642120 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.642170 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.642179 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.642195 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.642204 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.744760 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.744799 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.744808 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.744823 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.744833 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.847216 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.847264 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.847275 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.847291 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.847303 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.848585 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.848607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.848618 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.848630 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.848641 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: E0126 07:54:50.860512 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:50Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.864976 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.865019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.865030 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.865048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.865058 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: E0126 07:54:50.881660 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:50Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.885457 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.885489 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.885497 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.885512 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.885535 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: E0126 07:54:50.901676 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:50Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.904445 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.904472 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.904482 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.904512 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.904537 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: E0126 07:54:50.914392 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:50Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.917411 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.917463 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.917475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.917487 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.917496 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:50 crc kubenswrapper[4806]: E0126 07:54:50.929301 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:50Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:50 crc kubenswrapper[4806]: E0126 07:54:50.929401 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.949042 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.949070 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.949080 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.949089 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:50 crc kubenswrapper[4806]: I0126 07:54:50.949097 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:50Z","lastTransitionTime":"2026-01-26T07:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.052743 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.052808 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.052828 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.052855 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.052874 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:51Z","lastTransitionTime":"2026-01-26T07:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.060504 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.089683 4806 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8
dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.102093 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 11:07:50.276838138 +0000 UTC Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.104957 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.124286 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.141304 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.152811 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.156086 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.156127 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.156142 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.156181 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.156192 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:51Z","lastTransitionTime":"2026-01-26T07:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.173648 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:47Z\\\",\\\"message\\\":\\\"2026-01-26T07:54:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59\\\\n2026-01-26T07:54:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59 to /host/opt/cni/bin/\\\\n2026-01-26T07:54:02Z [verbose] multus-daemon started\\\\n2026-01-26T07:54:02Z [verbose] Readiness Indicator file check\\\\n2026-01-26T07:54:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.189728 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2d0753-513a-4924-9985-d1058d2cda9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da291d9c45edfc19eed33fce385cb1d8005585d2e9ac5fdfc50040a9c3038683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6719f7820924ce02c29e99fc9fb5c6e733597c1f006b3a2701046680af978e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e90cbd21cd7080a82e2440f441264b1fd1b45ee048fbcadc7afbd7b4384996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.202910 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.214353 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.224095 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.233548 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.246590 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 
07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.258414 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.258452 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.258463 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.258480 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.258493 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:51Z","lastTransitionTime":"2026-01-26T07:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.258267 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a365c4a-dfc5-4290-920d-f1f04e322061\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d54a191a8969d10a108b9fa36fe6c03f55969e7bd6c73aef14d2936d92290ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08a
af09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.271030 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.282562 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.292502 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.308686 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42
930193e14dec2164c415acd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:31Z\\\",\\\"message\\\":\\\"o create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z]\\\\nI0126 07:54:31.006399 6338 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-console/networking-console-plugin_TCP_cluster\\\\\\\", UUID:\\\\\\\"ab0b1d51-5ec6-479b-8881-93dfa8d30337\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.319044 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:51Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.360490 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.360546 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.360556 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.360572 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.360583 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:51Z","lastTransitionTime":"2026-01-26T07:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.463629 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.463676 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.463686 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.463705 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.463718 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:51Z","lastTransitionTime":"2026-01-26T07:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.565840 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.565890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.565902 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.565918 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.565931 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:51Z","lastTransitionTime":"2026-01-26T07:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.668137 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.668175 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.668184 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.668199 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.668209 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:51Z","lastTransitionTime":"2026-01-26T07:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.770405 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.770448 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.770457 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.770472 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.770482 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:51Z","lastTransitionTime":"2026-01-26T07:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.872215 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.872258 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.872269 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.872284 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.872293 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:51Z","lastTransitionTime":"2026-01-26T07:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.974898 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.974936 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.974946 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.974960 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:51 crc kubenswrapper[4806]: I0126 07:54:51.974970 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:51Z","lastTransitionTime":"2026-01-26T07:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.041504 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.041568 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.041513 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.041643 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:52 crc kubenswrapper[4806]: E0126 07:54:52.041799 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:52 crc kubenswrapper[4806]: E0126 07:54:52.041875 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:52 crc kubenswrapper[4806]: E0126 07:54:52.041936 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:52 crc kubenswrapper[4806]: E0126 07:54:52.042055 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.077481 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.077549 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.077562 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.077580 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.077612 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:52Z","lastTransitionTime":"2026-01-26T07:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.103074 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:59:50.610381753 +0000 UTC Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.180185 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.180221 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.180232 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.180250 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.180264 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:52Z","lastTransitionTime":"2026-01-26T07:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.282079 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.282118 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.282127 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.282141 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.282150 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:52Z","lastTransitionTime":"2026-01-26T07:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.384607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.384634 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.384642 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.384655 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.384663 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:52Z","lastTransitionTime":"2026-01-26T07:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.486469 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.486497 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.486506 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.486522 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.486542 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:52Z","lastTransitionTime":"2026-01-26T07:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.589260 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.589311 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.589322 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.589341 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.589353 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:52Z","lastTransitionTime":"2026-01-26T07:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.691650 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.691683 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.691693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.691707 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.691720 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:52Z","lastTransitionTime":"2026-01-26T07:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.795367 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.795405 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.795415 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.795431 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.795441 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:52Z","lastTransitionTime":"2026-01-26T07:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.897514 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.897576 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.897589 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.897608 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:52 crc kubenswrapper[4806]: I0126 07:54:52.897620 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:52Z","lastTransitionTime":"2026-01-26T07:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.000533 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.000583 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.000593 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.000610 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.000623 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:53Z","lastTransitionTime":"2026-01-26T07:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.102767 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.102812 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.102835 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.102853 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.102864 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:53Z","lastTransitionTime":"2026-01-26T07:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.103851 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:29:04.242586844 +0000 UTC Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.205598 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.205645 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.205655 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.205671 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.205683 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:53Z","lastTransitionTime":"2026-01-26T07:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.307759 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.307799 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.307811 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.307825 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.307835 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:53Z","lastTransitionTime":"2026-01-26T07:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.410634 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.410671 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.410679 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.410692 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.410701 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:53Z","lastTransitionTime":"2026-01-26T07:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.512670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.512699 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.512709 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.512722 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.512731 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:53Z","lastTransitionTime":"2026-01-26T07:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.614812 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.614855 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.614867 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.614883 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.614893 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:53Z","lastTransitionTime":"2026-01-26T07:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.717952 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.717998 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.718012 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.718033 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.718048 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:53Z","lastTransitionTime":"2026-01-26T07:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.820507 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.820594 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.820609 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.820629 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.820643 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:53Z","lastTransitionTime":"2026-01-26T07:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.923354 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.923405 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.923417 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.923435 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:53 crc kubenswrapper[4806]: I0126 07:54:53.923500 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:53Z","lastTransitionTime":"2026-01-26T07:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.025682 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.025747 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.025765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.025789 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.025807 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:54Z","lastTransitionTime":"2026-01-26T07:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.041062 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.041160 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:54 crc kubenswrapper[4806]: E0126 07:54:54.041186 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.041245 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.041261 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:54 crc kubenswrapper[4806]: E0126 07:54:54.041378 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:54 crc kubenswrapper[4806]: E0126 07:54:54.041450 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:54 crc kubenswrapper[4806]: E0126 07:54:54.041544 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.133965 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:02:17.303287371 +0000 UTC Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.135070 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.135121 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.135134 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.135155 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.135167 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:54Z","lastTransitionTime":"2026-01-26T07:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.237654 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.237693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.237704 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.237721 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.237733 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:54Z","lastTransitionTime":"2026-01-26T07:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.340181 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.340222 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.340234 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.340255 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.340267 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:54Z","lastTransitionTime":"2026-01-26T07:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.443048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.443099 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.443111 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.443134 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.443151 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:54Z","lastTransitionTime":"2026-01-26T07:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.546163 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.546212 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.546222 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.546244 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.546255 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:54Z","lastTransitionTime":"2026-01-26T07:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.648793 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.648845 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.648856 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.648876 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.648888 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:54Z","lastTransitionTime":"2026-01-26T07:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.752047 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.752096 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.752110 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.752131 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.752148 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:54Z","lastTransitionTime":"2026-01-26T07:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.854871 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.854920 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.854931 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.854950 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.854960 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:54Z","lastTransitionTime":"2026-01-26T07:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.957473 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.957661 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.957674 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.957695 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:54 crc kubenswrapper[4806]: I0126 07:54:54.957712 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:54Z","lastTransitionTime":"2026-01-26T07:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.042941 4806 scope.go:117] "RemoveContainer" containerID="cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.060869 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.060901 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.060910 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.060927 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.060937 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:55Z","lastTransitionTime":"2026-01-26T07:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.134171 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 11:40:49.040882648 +0000 UTC Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.165042 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.165087 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.165125 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.165148 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.165207 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:55Z","lastTransitionTime":"2026-01-26T07:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.268985 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.269080 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.269100 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.269157 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.269177 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:55Z","lastTransitionTime":"2026-01-26T07:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.376977 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.377052 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.377070 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.377098 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.377123 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:55Z","lastTransitionTime":"2026-01-26T07:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.480977 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.481059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.481081 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.481111 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.481159 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:55Z","lastTransitionTime":"2026-01-26T07:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.545817 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/2.log" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.553014 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2"} Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.554788 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.575777 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.585286 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.585312 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.585323 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.585342 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.585356 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:55Z","lastTransitionTime":"2026-01-26T07:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.590190 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a365c4a-dfc5-4290-920d-f1f04e322061\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d54a191a8969d10a108b9fa36fe6c03f55969e7bd6c73aef14d2936d92290ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.609473 4806 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.629423 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.651578 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.681415 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2814fbaab959fae53eddddeffb17ab97a5a8ad
31fbc20bcca4cdad140a24e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:31Z\\\",\\\"message\\\":\\\"o create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z]\\\\nI0126 07:54:31.006399 6338 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-console/networking-console-plugin_TCP_cluster\\\\\\\", UUID:\\\\\\\"ab0b1d51-5ec6-479b-8881-93dfa8d30337\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"in
itContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.688544 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.688568 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.688577 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.688591 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.688617 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:55Z","lastTransitionTime":"2026-01-26T07:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.696602 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:47Z\\\",\\\"message\\\":\\\"2026-01-26T07:54:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59\\\\n2026-01-26T07:54:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59 to /host/opt/cni/bin/\\\\n2026-01-26T07:54:02Z [verbose] multus-daemon started\\\\n2026-01-26T07:54:02Z [verbose] Readiness Indicator file check\\\\n2026-01-26T07:54:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.717720 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.742803 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.769604 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad
206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.790334 4806 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.791578 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.791635 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.791647 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.791665 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.791678 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:55Z","lastTransitionTime":"2026-01-26T07:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.810755 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.839052 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.862465 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2d0753-513a-4924-9985-d1058d2cda9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da291d9c45edfc19eed33fce385cb1d8005585d2e9ac5fdfc50040a9c3038683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6719f7820924ce02c29e99fc9fb5c6e733597c1f006b3a2701046680af978e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e90cbd21cd7080a82e2440f441264b1fd1b45ee048fbcadc7afbd7b4384996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.882344 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.894349 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.894386 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.894396 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.894411 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.894422 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:55Z","lastTransitionTime":"2026-01-26T07:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.898954 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.911481 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.922943 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.936834 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:55Z is after 2025-08-24T17:21:41Z" Jan 26 
07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.998594 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.998665 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.998679 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.998738 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:55 crc kubenswrapper[4806]: I0126 07:54:55.998757 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:55Z","lastTransitionTime":"2026-01-26T07:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.040946 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.040978 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.041049 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.041012 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:56 crc kubenswrapper[4806]: E0126 07:54:56.041150 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:56 crc kubenswrapper[4806]: E0126 07:54:56.041277 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:56 crc kubenswrapper[4806]: E0126 07:54:56.041478 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:56 crc kubenswrapper[4806]: E0126 07:54:56.041600 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.101691 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.101743 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.101759 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.101783 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.101798 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:56Z","lastTransitionTime":"2026-01-26T07:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.135473 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:47:16.915343753 +0000 UTC Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.204391 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.204437 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.204447 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.204464 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.204473 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:56Z","lastTransitionTime":"2026-01-26T07:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.307754 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.307817 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.307833 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.307863 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.307878 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:56Z","lastTransitionTime":"2026-01-26T07:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.411856 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.411936 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.411956 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.411986 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.412006 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:56Z","lastTransitionTime":"2026-01-26T07:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.516095 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.516160 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.516179 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.516208 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.516233 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:56Z","lastTransitionTime":"2026-01-26T07:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.619929 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.620019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.620042 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.620073 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.620093 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:56Z","lastTransitionTime":"2026-01-26T07:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.722623 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.722670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.722682 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.722701 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.722714 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:56Z","lastTransitionTime":"2026-01-26T07:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.825680 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.825768 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.825796 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.825825 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.825845 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:56Z","lastTransitionTime":"2026-01-26T07:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.928109 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.928165 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.928180 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.928203 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:56 crc kubenswrapper[4806]: I0126 07:54:56.928218 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:56Z","lastTransitionTime":"2026-01-26T07:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.030688 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.030756 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.030774 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.030799 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.030816 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:57Z","lastTransitionTime":"2026-01-26T07:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.134369 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.134441 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.134461 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.134491 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.134511 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:57Z","lastTransitionTime":"2026-01-26T07:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.136506 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:35:27.454846307 +0000 UTC Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.237729 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.237783 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.237797 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.237820 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.237834 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:57Z","lastTransitionTime":"2026-01-26T07:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.341463 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.341568 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.341583 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.341607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.341624 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:57Z","lastTransitionTime":"2026-01-26T07:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.445322 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.445382 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.445401 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.445427 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.445449 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:57Z","lastTransitionTime":"2026-01-26T07:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.547899 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.547977 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.548001 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.548030 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.548139 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:57Z","lastTransitionTime":"2026-01-26T07:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.562168 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/3.log" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.562843 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/2.log" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.565447 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerID="df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2" exitCode=1 Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.565489 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2"} Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.565539 4806 scope.go:117] "RemoveContainer" containerID="cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.566652 4806 scope.go:117] "RemoveContainer" containerID="df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2" Jan 26 07:54:57 crc kubenswrapper[4806]: E0126 07:54:57.566825 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.578879 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2d0753-513a-4924-9985-d1058d2cda9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da291d9c45edfc19eed33fce385cb1d8005585d2e9ac5fdfc50040a9c3038683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6719f7820924ce02c29e99fc9fb5c6e733597c1f006b3a2701046680af978e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e90cbd21cd7080a82e2440f441264b1fd1b45ee048fbcadc7afbd7b4384996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.590783 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.605601 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.623199 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.634260 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.646637 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 
07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.650364 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.650402 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.650412 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.650427 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.650440 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:57Z","lastTransitionTime":"2026-01-26T07:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.668130 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2814fbaab959fae53eddddeffb17ab97a5a8ad
31fbc20bcca4cdad140a24e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:31Z\\\",\\\"message\\\":\\\"o create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z]\\\\nI0126 07:54:31.006399 6338 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-console/networking-console-plugin_TCP_cluster\\\\\\\", UUID:\\\\\\\"ab0b1d51-5ec6-479b-8881-93dfa8d30337\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:56Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI0126 07:54:56.255878 6711 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 07:54:56.255905 6711 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 07:54:56.255984 6711 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 07:54:56.256027 6711 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 07:54:56.256081 6711 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 07:54:56.256139 6711 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 07:54:56.256186 6711 factory.go:656] Stopping watch factory\\\\nI0126 07:54:56.256241 6711 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 07:54:56.256268 6711 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 07:54:56.256352 6711 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 
07:54:56.256438 6711 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 07:54:56.256482 6711 ovnkube.go:599] Stopped ovnkube\\\\nI0126 07:54:56.256561 6711 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 07:54:56.256649 6711 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hos
tIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.681161 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.690499 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a365c4a-dfc5-4290-920d-f1f04e322061\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d54a191a8969d10a108b9fa36fe6c03f55969e7bd6c73aef14d2936d92290ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.702129 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.715749 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.727035 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.739275 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.754117 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.754289 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.754453 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.754557 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.754626 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:57Z","lastTransitionTime":"2026-01-26T07:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.755509 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:47Z\\\",\\\"message\\\":\\\"2026-01-26T07:54:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59\\\\n2026-01-26T07:54:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59 to /host/opt/cni/bin/\\\\n2026-01-26T07:54:02Z [verbose] multus-daemon started\\\\n2026-01-26T07:54:02Z 
[verbose] Readiness Indicator file check\\\\n2026-01-26T07:54:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.772062 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.793051 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.809355 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad
206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.824173 4806 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.837207 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:57Z is after 2025-08-24T17:21:41Z" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.857566 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.857632 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.857645 4806 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.857669 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.857682 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:57Z","lastTransitionTime":"2026-01-26T07:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.961977 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.962401 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.962500 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.962623 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:57 crc kubenswrapper[4806]: I0126 07:54:57.962727 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:57Z","lastTransitionTime":"2026-01-26T07:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.041636 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:54:58 crc kubenswrapper[4806]: E0126 07:54:58.042057 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.041746 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:54:58 crc kubenswrapper[4806]: E0126 07:54:58.042384 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.041746 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:54:58 crc kubenswrapper[4806]: E0126 07:54:58.042698 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.041846 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:54:58 crc kubenswrapper[4806]: E0126 07:54:58.042947 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.065548 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.065599 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.065612 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.065633 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.065647 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:58Z","lastTransitionTime":"2026-01-26T07:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.137709 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 18:44:45.329836097 +0000 UTC Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.168383 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.168426 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.168438 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.168454 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.168464 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:58Z","lastTransitionTime":"2026-01-26T07:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.270591 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.270621 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.270629 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.270643 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.270652 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:58Z","lastTransitionTime":"2026-01-26T07:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.373013 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.373715 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.373767 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.373803 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.373847 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:58Z","lastTransitionTime":"2026-01-26T07:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.477118 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.477152 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.477161 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.477175 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.477187 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:58Z","lastTransitionTime":"2026-01-26T07:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.575199 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/3.log" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.579433 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.579459 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.579471 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.579487 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.579499 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:58Z","lastTransitionTime":"2026-01-26T07:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.681824 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.681868 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.681879 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.681895 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.681905 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:58Z","lastTransitionTime":"2026-01-26T07:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.784275 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.784341 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.784352 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.784369 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.784380 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:58Z","lastTransitionTime":"2026-01-26T07:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.886464 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.886499 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.886510 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.886547 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.886561 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:58Z","lastTransitionTime":"2026-01-26T07:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.989319 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.989369 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.989385 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.989406 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:58 crc kubenswrapper[4806]: I0126 07:54:58.989419 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:58Z","lastTransitionTime":"2026-01-26T07:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.092842 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.092888 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.092899 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.092920 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.092932 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:59Z","lastTransitionTime":"2026-01-26T07:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.138012 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:43:53.526849811 +0000 UTC Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.195700 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.195768 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.195782 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.195808 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.195825 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:59Z","lastTransitionTime":"2026-01-26T07:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.299110 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.299186 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.299206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.299233 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.299251 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:59Z","lastTransitionTime":"2026-01-26T07:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.403307 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.403376 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.403398 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.403433 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.403457 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:59Z","lastTransitionTime":"2026-01-26T07:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.506899 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.506975 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.506993 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.507018 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.507036 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:59Z","lastTransitionTime":"2026-01-26T07:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.609929 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.610018 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.610042 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.610074 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.610092 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:59Z","lastTransitionTime":"2026-01-26T07:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.712805 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.712877 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.712913 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.712947 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.712971 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:59Z","lastTransitionTime":"2026-01-26T07:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.815564 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.815606 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.815621 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.815636 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.815646 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:59Z","lastTransitionTime":"2026-01-26T07:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.922934 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.922981 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.922998 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.923016 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:54:59 crc kubenswrapper[4806]: I0126 07:54:59.923026 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:54:59Z","lastTransitionTime":"2026-01-26T07:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.025355 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.025396 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.025407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.025423 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.025434 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:00Z","lastTransitionTime":"2026-01-26T07:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.041647 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.041687 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.041727 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.041664 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:00 crc kubenswrapper[4806]: E0126 07:55:00.041784 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:00 crc kubenswrapper[4806]: E0126 07:55:00.041856 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:00 crc kubenswrapper[4806]: E0126 07:55:00.041926 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:00 crc kubenswrapper[4806]: E0126 07:55:00.042002 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.127945 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.128007 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.128024 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.128046 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.128062 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:00Z","lastTransitionTime":"2026-01-26T07:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.138366 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 05:33:54.751580321 +0000 UTC Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.231269 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.231407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.231424 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.231441 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.231453 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:00Z","lastTransitionTime":"2026-01-26T07:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.334021 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.334059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.334069 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.334081 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.334091 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:00Z","lastTransitionTime":"2026-01-26T07:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.436552 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.436601 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.436615 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.436635 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.436647 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:00Z","lastTransitionTime":"2026-01-26T07:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.539735 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.539791 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.539807 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.539829 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.539846 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:00Z","lastTransitionTime":"2026-01-26T07:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.643090 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.643138 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.643151 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.643172 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.643186 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:00Z","lastTransitionTime":"2026-01-26T07:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.781301 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.781330 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.781341 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.781357 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.781366 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:00Z","lastTransitionTime":"2026-01-26T07:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.883624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.883682 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.883693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.883718 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.883731 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:00Z","lastTransitionTime":"2026-01-26T07:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.986266 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.986307 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.986317 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.986332 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:00 crc kubenswrapper[4806]: I0126 07:55:00.986341 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:00Z","lastTransitionTime":"2026-01-26T07:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.055829 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2d0753-513a-4924-9985-d1058d2cda9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da291d9c45edfc19eed33fce385cb1d8005585d2e9ac5fdfc50040a9c3038683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6719f7820924ce02c29e99fc9fb5c6e733597c1f006b3a2701046680af978e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e90cbd21cd7080a82e2440f441264b1fd1b45ee048fbcadc7afbd7b4384996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.074298 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.089171 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.089225 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.089235 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.089255 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.089268 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.090989 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.106687 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.121411 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.136805 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 
07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.138830 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:16:22.656977595 +0000 UTC Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.147316 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a365c4a-dfc5-4290-920d-f1f04e322061\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d54a191a8969d10a108b9fa36fe6c03f55969e7bd6c73aef14d2936d92290ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.161174 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.176592 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.185426 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.185473 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.185482 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.185498 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.185508 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.197038 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: E0126 07:55:01.199311 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.206556 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.206601 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.206613 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.206628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.206639 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.221107 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:31Z\\\",\\\"message\\\":\\\"o create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z]\\\\nI0126 07:54:31.006399 6338 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-console/networking-console-plugin_TCP_cluster\\\\\\\", UUID:\\\\\\\"ab0b1d51-5ec6-479b-8881-93dfa8d30337\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLBGr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:56Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI0126 07:54:56.255878 6711 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 07:54:56.255905 6711 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 07:54:56.255984 6711 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 07:54:56.256027 6711 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 07:54:56.256081 6711 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 07:54:56.256139 6711 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 07:54:56.256186 6711 factory.go:656] Stopping watch factory\\\\nI0126 07:54:56.256241 6711 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 07:54:56.256268 6711 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 07:54:56.256352 6711 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 07:54:56.256438 6711 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 07:54:56.256482 6711 ovnkube.go:599] Stopped ovnkube\\\\nI0126 07:54:56.256561 6711 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 07:54:56.256649 6711 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd4
7ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: E0126 07:55:01.223689 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.226363 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.226393 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.226401 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.226422 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.226433 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.235603 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: E0126 07:55:01.243859 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.247582 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.247620 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.247630 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.247645 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.247656 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.256955 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: E0126 07:55:01.258246 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae66
9\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.261333 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.261375 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.261408 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.261428 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.261439 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.270392 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: E0126 07:55:01.271720 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: E0126 07:55:01.271866 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.273272 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.273308 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.273320 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.273337 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.273348 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.282235 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.293648 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.302382 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.314315 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:47Z\\\",\\\"message\\\":\\\"2026-01-26T07:54:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59\\\\n2026-01-26T07:54:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59 to /host/opt/cni/bin/\\\\n2026-01-26T07:54:02Z [verbose] multus-daemon started\\\\n2026-01-26T07:54:02Z [verbose] Readiness Indicator file check\\\\n2026-01-26T07:54:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.327023 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:01Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.375306 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.375335 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc 
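[Annotation] The long status-patch entry above is rejected because the pod.network-node-identity.openshift.io webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-26: the TLS handshake to https://127.0.0.1:9743 fails certificate verification, so the kubelet cannot record the pod status for openshift-multus/multus-additional-cni-plugins-268q5. A minimal Go sketch of the same validity check; the certificate path is a placeholder, not a path taken from this log.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path: point this at the webhook's serving certificate.
	data, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	now := time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s now=%s\n", cert.NotBefore, cert.NotAfter, now)
	switch {
	case now.After(cert.NotAfter):
		fmt.Println("certificate has expired") // the condition reported in the log
	case now.Before(cert.NotBefore):
		fmt.Println("certificate is not yet valid")
	default:
		fmt.Println("certificate is currently valid")
	}
}
```

The same NotAfter comparison is what produces the "current time ... is after ..." wording in the x509 error.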
kubenswrapper[4806]: I0126 07:55:01.375345 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.375358 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.375367 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.477911 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.477943 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.477951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.477965 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.477974 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.580338 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.580381 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.580392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.580411 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.580423 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.682737 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.682773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.682785 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.682801 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.682813 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.785078 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.785147 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.785170 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.785201 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.785224 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.888133 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.888183 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.888201 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.888225 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.888245 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
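[Annotation] The KubeletNotReady condition repeated from here on comes from the container runtime's network check: no CNI configuration file has been written to /etc/kubernetes/cni/net.d/ yet, which in turn keeps the network pods from starting. A minimal sketch of inspecting that directory, assuming the usual .conf/.conflist/.json extensions a CNI config loader accepts.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory taken from the kubelet error message above.
	confDir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	found := 0
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed accepted extensions
			fmt.Println("CNI config:", filepath.Join(confDir, e.Name()))
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files found; the network plugin has not written its config yet")
	}
}
```

Once the network plugin drops a config file into this directory, the NetworkReady=false condition clears on the next runtime status sync.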
Has your network provider started?"} Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.990943 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.990988 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.991006 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.991032 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:01 crc kubenswrapper[4806]: I0126 07:55:01.991052 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:01Z","lastTransitionTime":"2026-01-26T07:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.041721 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:02 crc kubenswrapper[4806]: E0126 07:55:02.041889 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.041979 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:02 crc kubenswrapper[4806]: E0126 07:55:02.042066 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.042204 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:02 crc kubenswrapper[4806]: E0126 07:55:02.042286 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.042342 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:02 crc kubenswrapper[4806]: E0126 07:55:02.042394 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.093579 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.093628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.093645 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.093666 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.093685 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:02Z","lastTransitionTime":"2026-01-26T07:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.139189 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:38:57.286868035 +0000 UTC Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.196401 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.196466 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.196490 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.196517 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.196564 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:02Z","lastTransitionTime":"2026-01-26T07:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.299503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.299593 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.299612 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.299638 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.299657 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:02Z","lastTransitionTime":"2026-01-26T07:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.403155 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.403191 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.403206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.403223 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.403235 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:02Z","lastTransitionTime":"2026-01-26T07:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.506023 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.506111 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.506128 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.506148 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.506168 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:02Z","lastTransitionTime":"2026-01-26T07:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.608593 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.608637 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.608652 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.608672 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.608687 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:02Z","lastTransitionTime":"2026-01-26T07:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.711325 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.711384 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.711400 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.711424 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.711442 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:02Z","lastTransitionTime":"2026-01-26T07:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.817775 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.818324 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.818374 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.818411 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.818449 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:02Z","lastTransitionTime":"2026-01-26T07:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.920961 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.921490 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.921835 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.922018 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:02 crc kubenswrapper[4806]: I0126 07:55:02.922425 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:02Z","lastTransitionTime":"2026-01-26T07:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.026097 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.026157 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.026174 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.026198 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.026218 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:03Z","lastTransitionTime":"2026-01-26T07:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.130034 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.130164 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.130189 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.130214 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.130237 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:03Z","lastTransitionTime":"2026-01-26T07:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
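[Annotation] Each setters.go entry logs the Ready condition the kubelet is about to write to the Node object as a JSON payload (condition={...}). A short decoding sketch; the local struct simply mirrors the field names visible in the log (the canonical type is the core/v1 NodeCondition), and the message string is trimmed for brevity.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the fields of the condition object printed in the log.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:02Z",` +
		`"lastTransitionTime":"2026-01-26T07:55:02Z","reason":"KubeletNotReady",` +
		`"message":"container runtime network not ready"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
```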
Has your network provider started?"} Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.139380 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:04:21.898311197 +0000 UTC Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.233463 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.233573 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.233590 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.233612 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.233628 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:03Z","lastTransitionTime":"2026-01-26T07:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.335977 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.336010 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.336019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.336032 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.336042 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:03Z","lastTransitionTime":"2026-01-26T07:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.438448 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.438567 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.438595 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.438625 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.438648 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:03Z","lastTransitionTime":"2026-01-26T07:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.541214 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.541250 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.541267 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.541284 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.541295 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:03Z","lastTransitionTime":"2026-01-26T07:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.644935 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.644972 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.644984 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.645000 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.645015 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:03Z","lastTransitionTime":"2026-01-26T07:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.747834 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.747865 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.747875 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.747910 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.747919 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:03Z","lastTransitionTime":"2026-01-26T07:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.850640 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.850695 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.850707 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.850727 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.850740 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:03Z","lastTransitionTime":"2026-01-26T07:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.914608 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:03 crc kubenswrapper[4806]: E0126 07:55:03.914877 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:07.914843623 +0000 UTC m=+147.179251679 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.953720 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.953771 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.953788 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.953807 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:03 crc kubenswrapper[4806]: I0126 07:55:03.953820 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:03Z","lastTransitionTime":"2026-01-26T07:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.015761 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.015834 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.015860 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.015895 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.015960 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.015961 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.016013 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:56:08.01599762 +0000 UTC m=+147.280405686 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.016028 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 07:56:08.01602096 +0000 UTC m=+147.280429026 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.016093 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.016140 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.016151 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.016205 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 07:56:08.016189895 +0000 UTC m=+147.280597951 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.016093 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.016229 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.016237 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.016261 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 07:56:08.016253317 +0000 UTC m=+147.280661373 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.041813 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.041876 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.041938 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.041894 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.041876 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.042079 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.042155 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:04 crc kubenswrapper[4806]: E0126 07:55:04.042259 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.056745 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.056789 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.056802 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.056820 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.056833 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:04Z","lastTransitionTime":"2026-01-26T07:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.139669 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:16:29.956572506 +0000 UTC Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.159464 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.159510 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.159543 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.159563 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.159578 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:04Z","lastTransitionTime":"2026-01-26T07:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.262641 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.262689 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.262698 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.262714 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.262724 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:04Z","lastTransitionTime":"2026-01-26T07:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.366316 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.366380 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.366398 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.366423 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.366443 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:04Z","lastTransitionTime":"2026-01-26T07:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.468882 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.468945 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.468969 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.469102 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.469128 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:04Z","lastTransitionTime":"2026-01-26T07:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.572820 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.573062 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.573095 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.573129 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.573151 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:04Z","lastTransitionTime":"2026-01-26T07:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.677097 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.677186 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.677212 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.677249 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.677274 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:04Z","lastTransitionTime":"2026-01-26T07:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.780270 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.780344 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.780363 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.780392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.780414 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:04Z","lastTransitionTime":"2026-01-26T07:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.883472 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.883584 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.883604 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.883631 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.883648 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:04Z","lastTransitionTime":"2026-01-26T07:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.986805 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.986903 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.986930 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.986966 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:04 crc kubenswrapper[4806]: I0126 07:55:04.986991 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:04Z","lastTransitionTime":"2026-01-26T07:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.091048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.091104 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.091115 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.091131 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.091141 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:05Z","lastTransitionTime":"2026-01-26T07:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.140029 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:53:23.667200607 +0000 UTC Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.194026 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.194409 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.194418 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.194432 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.194442 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:05Z","lastTransitionTime":"2026-01-26T07:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.296361 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.296398 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.296407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.296421 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.296430 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:05Z","lastTransitionTime":"2026-01-26T07:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.398285 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.398389 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.398409 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.398448 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.398501 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:05Z","lastTransitionTime":"2026-01-26T07:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.501616 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.502080 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.502237 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.502396 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.502566 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:05Z","lastTransitionTime":"2026-01-26T07:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.605177 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.605213 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.605223 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.605238 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.605248 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:05Z","lastTransitionTime":"2026-01-26T07:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.708947 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.709011 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.709029 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.709056 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.709075 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:05Z","lastTransitionTime":"2026-01-26T07:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.822579 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.822667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.822690 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.822724 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.822747 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:05Z","lastTransitionTime":"2026-01-26T07:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.926890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.926972 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.926994 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.927026 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:05 crc kubenswrapper[4806]: I0126 07:55:05.927048 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:05Z","lastTransitionTime":"2026-01-26T07:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.029735 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.029785 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.029797 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.029820 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.029836 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:06Z","lastTransitionTime":"2026-01-26T07:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.041075 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.041144 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.041079 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.041219 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:06 crc kubenswrapper[4806]: E0126 07:55:06.041317 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:06 crc kubenswrapper[4806]: E0126 07:55:06.041441 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:06 crc kubenswrapper[4806]: E0126 07:55:06.042035 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:06 crc kubenswrapper[4806]: E0126 07:55:06.042256 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.132241 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.132302 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.132318 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.132340 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.132355 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:06Z","lastTransitionTime":"2026-01-26T07:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.140419 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 03:07:41.655300514 +0000 UTC Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.236264 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.236326 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.236344 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.236369 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.236382 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:06Z","lastTransitionTime":"2026-01-26T07:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.340395 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.340476 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.340496 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.340549 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.340573 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:06Z","lastTransitionTime":"2026-01-26T07:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.444393 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.444439 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.444452 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.444475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.444489 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:06Z","lastTransitionTime":"2026-01-26T07:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.548430 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.548486 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.548499 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.548543 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.548557 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:06Z","lastTransitionTime":"2026-01-26T07:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.651776 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.652582 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.652656 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.652778 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.652869 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:06Z","lastTransitionTime":"2026-01-26T07:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.755925 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.755997 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.756023 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.756056 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.756074 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:06Z","lastTransitionTime":"2026-01-26T07:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.858830 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.859176 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.859261 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.859354 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.859417 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:06Z","lastTransitionTime":"2026-01-26T07:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.961255 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.961297 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.961307 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.961323 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:06 crc kubenswrapper[4806]: I0126 07:55:06.961332 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:06Z","lastTransitionTime":"2026-01-26T07:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.063558 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.063595 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.063607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.063645 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.063657 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:07Z","lastTransitionTime":"2026-01-26T07:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.141194 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:47:28.588379212 +0000 UTC Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.166111 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.166151 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.166165 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.166182 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.166193 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:07Z","lastTransitionTime":"2026-01-26T07:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.269282 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.269321 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.269330 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.269345 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.269355 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:07Z","lastTransitionTime":"2026-01-26T07:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.371757 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.371794 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.371805 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.371821 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.371832 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:07Z","lastTransitionTime":"2026-01-26T07:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.474283 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.474621 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.474766 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.474857 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.474930 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:07Z","lastTransitionTime":"2026-01-26T07:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.577549 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.577605 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.577619 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.577638 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.577650 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:07Z","lastTransitionTime":"2026-01-26T07:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.680029 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.680103 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.680118 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.680133 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.680163 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:07Z","lastTransitionTime":"2026-01-26T07:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.782243 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.782288 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.782297 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.782312 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.782324 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:07Z","lastTransitionTime":"2026-01-26T07:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.884891 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.884940 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.884951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.884963 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.885047 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:07Z","lastTransitionTime":"2026-01-26T07:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.986458 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.986707 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.986772 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.986846 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:07 crc kubenswrapper[4806]: I0126 07:55:07.986906 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:07Z","lastTransitionTime":"2026-01-26T07:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.041862 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.041893 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:08 crc kubenswrapper[4806]: E0126 07:55:08.042018 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.042027 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:08 crc kubenswrapper[4806]: E0126 07:55:08.042124 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:08 crc kubenswrapper[4806]: E0126 07:55:08.042250 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.042645 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:08 crc kubenswrapper[4806]: E0126 07:55:08.042703 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.088944 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.089035 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.089059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.089086 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.089133 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:08Z","lastTransitionTime":"2026-01-26T07:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.141954 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 09:55:06.248190439 +0000 UTC Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.191190 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.191237 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.191247 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.191263 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.191273 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:08Z","lastTransitionTime":"2026-01-26T07:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.293893 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.293951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.293965 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.293990 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.294008 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:08Z","lastTransitionTime":"2026-01-26T07:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.396099 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.396253 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.396264 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.396280 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.396290 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:08Z","lastTransitionTime":"2026-01-26T07:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.500073 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.500225 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.500267 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.500308 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.500346 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:08Z","lastTransitionTime":"2026-01-26T07:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.603972 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.604006 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.604017 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.604033 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.604043 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:08Z","lastTransitionTime":"2026-01-26T07:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.707107 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.707171 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.707191 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.707222 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.707245 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:08Z","lastTransitionTime":"2026-01-26T07:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.810015 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.810047 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.810056 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.810070 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.810079 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:08Z","lastTransitionTime":"2026-01-26T07:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.913303 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.913338 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.913348 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.913366 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:08 crc kubenswrapper[4806]: I0126 07:55:08.913379 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:08Z","lastTransitionTime":"2026-01-26T07:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.016271 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.016332 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.016345 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.016361 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.016373 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:09Z","lastTransitionTime":"2026-01-26T07:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.119063 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.119103 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.119112 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.119147 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.119158 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:09Z","lastTransitionTime":"2026-01-26T07:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.142754 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:25:32.366888961 +0000 UTC Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.223695 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.223774 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.223799 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.223840 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.223868 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:09Z","lastTransitionTime":"2026-01-26T07:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.327009 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.327055 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.327066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.327085 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.327095 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:09Z","lastTransitionTime":"2026-01-26T07:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.430897 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.430998 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.431017 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.431039 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.431053 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:09Z","lastTransitionTime":"2026-01-26T07:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.534259 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.534329 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.534354 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.534390 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.534418 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:09Z","lastTransitionTime":"2026-01-26T07:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.637421 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.637476 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.637495 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.637555 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.637584 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:09Z","lastTransitionTime":"2026-01-26T07:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.741365 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.741420 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.741432 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.741452 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.741465 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:09Z","lastTransitionTime":"2026-01-26T07:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.844720 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.844762 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.844772 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.844794 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.844807 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:09Z","lastTransitionTime":"2026-01-26T07:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.948330 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.948386 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.948405 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.948430 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:09 crc kubenswrapper[4806]: I0126 07:55:09.948448 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:09Z","lastTransitionTime":"2026-01-26T07:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.041791 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.041844 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.041966 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.041966 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:10 crc kubenswrapper[4806]: E0126 07:55:10.042298 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:10 crc kubenswrapper[4806]: E0126 07:55:10.042370 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:10 crc kubenswrapper[4806]: E0126 07:55:10.042494 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:10 crc kubenswrapper[4806]: E0126 07:55:10.042772 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.051765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.052000 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.052250 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.052463 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.052707 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:10Z","lastTransitionTime":"2026-01-26T07:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.144007 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:22:42.916242152 +0000 UTC Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.155504 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.155624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.155650 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.155695 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.155724 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:10Z","lastTransitionTime":"2026-01-26T07:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.259007 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.259082 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.259098 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.259121 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.259140 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:10Z","lastTransitionTime":"2026-01-26T07:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.361877 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.362189 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.362211 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.362228 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.362239 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:10Z","lastTransitionTime":"2026-01-26T07:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.465372 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.465407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.465416 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.465428 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.465437 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:10Z","lastTransitionTime":"2026-01-26T07:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.567805 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.567856 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.567872 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.567890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.567904 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:10Z","lastTransitionTime":"2026-01-26T07:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.671275 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.671602 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.671769 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.671917 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.672034 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:10Z","lastTransitionTime":"2026-01-26T07:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.775507 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.775573 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.775582 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.775603 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.775621 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:10Z","lastTransitionTime":"2026-01-26T07:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.878996 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.879061 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.879078 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.879104 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.879121 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:10Z","lastTransitionTime":"2026-01-26T07:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.982077 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.982157 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.982170 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.982188 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:10 crc kubenswrapper[4806]: I0126 07:55:10.982206 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:10Z","lastTransitionTime":"2026-01-26T07:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.042874 4806 scope.go:117] "RemoveContainer" containerID="df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2" Jan 26 07:55:11 crc kubenswrapper[4806]: E0126 07:55:11.044135 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.063385 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\
"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.084319 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.084380 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.084394 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.084415 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.084428 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.092667 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.111015 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad
206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.130996 4806 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.146389 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 01:40:33.066361867 +0000 UTC Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.157119 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.174281 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.191132 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.191176 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.191188 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.191210 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.191226 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.196687 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:47Z\\\",\\\"message\\\":\\\"2026-01-26T07:54:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59\\\\n2026-01-26T07:54:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59 to /host/opt/cni/bin/\\\\n2026-01-26T07:54:02Z [verbose] multus-daemon started\\\\n2026-01-26T07:54:02Z [verbose] Readiness Indicator file check\\\\n2026-01-26T07:54:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.213697 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2d0753-513a-4924-9985-d1058d2cda9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da291d9c45edfc19eed33fce385cb1d8005585d2e9ac5fdfc50040a9c3038683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6719f7820924ce02c29e99fc9fb5c6e733597c1f006b3a2701046680af978e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e90cbd21cd7080a82e2440f441264b1fd1b45ee048fbcadc7afbd7b4384996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.235775 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.281785 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.293842 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.293886 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.293904 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.293921 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.293931 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.303961 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.314854 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.324135 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:
12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.334461 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a365c4a-dfc5-4290-920d-f1f04e322061\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d54a191a8969d10a108b9fa36fe6c03f55969e7bd6c73aef14d2936d92290ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.352107 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.359492 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.359573 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.359586 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.359604 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.359616 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.367897 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: E0126 07:55:11.375557 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8
cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.379165 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.379195 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.379205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.379220 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.379246 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.382097 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: E0126 07:55:11.391667 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.394995 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.395054 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.395067 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.395089 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.395101 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.400651 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2814fbaab959fae53eddddeffb17ab97a5a8ad
31fbc20bcca4cdad140a24e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cfc2e9b8a1bd4f2725d86ece596811599fc42c42930193e14dec2164c415acd3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:31Z\\\",\\\"message\\\":\\\"o create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:54:30Z is after 2025-08-24T17:21:41Z]\\\\nI0126 07:54:31.006399 6338 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-network-console/networking-console-plugin_TCP_cluster\\\\\\\", UUID:\\\\\\\"ab0b1d51-5ec6-479b-8881-93dfa8d30337\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:30Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:56Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI0126 07:54:56.255878 6711 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 07:54:56.255905 6711 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 07:54:56.255984 6711 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 07:54:56.256027 6711 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 07:54:56.256081 6711 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 07:54:56.256139 6711 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 07:54:56.256186 6711 factory.go:656] Stopping watch factory\\\\nI0126 07:54:56.256241 6711 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 07:54:56.256268 6711 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 07:54:56.256352 6711 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 
07:54:56.256438 6711 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 07:54:56.256482 6711 ovnkube.go:599] Stopped ovnkube\\\\nI0126 07:54:56.256561 6711 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 07:54:56.256649 6711 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hos
tIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: E0126 07:55:11.405005 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.408040 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.408080 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.408099 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.408118 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.408130 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.412161 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: E0126 07:55:11.423189 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.427685 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.427727 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.427742 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.427762 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.427779 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.429731 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: E0126 07:55:11.440712 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: E0126 07:55:11.440829 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.442415 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.442442 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.442450 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.442466 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.442474 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.449776 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2814fbaab959fae53eddddeffb17ab97a5a8ad
31fbc20bcca4cdad140a24e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:56Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI0126 07:54:56.255878 6711 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 07:54:56.255905 6711 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 07:54:56.255984 6711 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 07:54:56.256027 6711 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 07:54:56.256081 6711 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 07:54:56.256139 6711 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 07:54:56.256186 6711 factory.go:656] Stopping watch factory\\\\nI0126 07:54:56.256241 6711 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 07:54:56.256268 6711 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 07:54:56.256352 6711 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 07:54:56.256438 6711 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 07:54:56.256482 6711 ovnkube.go:599] Stopped ovnkube\\\\nI0126 07:54:56.256561 6711 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 07:54:56.256649 6711 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.462499 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.477660 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a365c4a-dfc5-4290-920d-f1f04e322061\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d54a191a8969d10a108b9fa36fe6c03f55969e7bd6c73aef14d2936d92290ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.498985 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.514090 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.527244 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.539717 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.544396 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 
07:55:11.544440 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.544451 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.544467 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.544475 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.554445 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:47Z\\\",\\\"message\\\":\\\"2026-01-26T07:54:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59\\\\n2026-01-26T07:54:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59 to /host/opt/cni/bin/\\\\n2026-01-26T07:54:02Z [verbose] multus-daemon started\\\\n2026-01-26T07:54:02Z [verbose] Readiness Indicator file check\\\\n2026-01-26T07:54:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.570565 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.589346 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9
b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.604727 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad
206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.621606 4806 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.634372 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.646799 4806 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.646868 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.646888 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.646914 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.646935 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.651089 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2d0753-513a-4924-9985-d1058d2cda9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da291d9c45edfc19eed33fce385cb1d8005585d2e9ac5fdfc50040a9c3038683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6719f7820924ce02c29e99fc9fb5c6e733597c1f006b3a2701046680af978e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e90cbd21cd7080a82e2440f441264b1fd1b45ee048fbcadc7afbd7b4384996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.666379 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.679152 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.690827 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.705620 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:11Z is after 2025-08-24T17:21:41Z" Jan 26 
07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.749575 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.749637 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.749651 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.749675 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.749696 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.851578 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.851644 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.851657 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.851673 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.851684 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.954940 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.954991 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.955004 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.955024 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:11 crc kubenswrapper[4806]: I0126 07:55:11.955035 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:11Z","lastTransitionTime":"2026-01-26T07:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.041010 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:12 crc kubenswrapper[4806]: E0126 07:55:12.041220 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.041329 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:12 crc kubenswrapper[4806]: E0126 07:55:12.041394 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.041451 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:12 crc kubenswrapper[4806]: E0126 07:55:12.041496 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.041586 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:12 crc kubenswrapper[4806]: E0126 07:55:12.041635 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.057573 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.057658 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.057672 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.057692 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.057706 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:12Z","lastTransitionTime":"2026-01-26T07:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.147461 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:54:21.498175459 +0000 UTC Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.159907 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.159946 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.159956 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.159972 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.159983 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:12Z","lastTransitionTime":"2026-01-26T07:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.262253 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.262330 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.262364 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.262393 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.262416 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:12Z","lastTransitionTime":"2026-01-26T07:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.365748 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.365835 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.365866 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.365899 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.365925 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:12Z","lastTransitionTime":"2026-01-26T07:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.468805 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.468854 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.468866 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.468894 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.468907 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:12Z","lastTransitionTime":"2026-01-26T07:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.571572 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.571637 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.571656 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.571681 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.571698 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:12Z","lastTransitionTime":"2026-01-26T07:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.674581 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.674630 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.674645 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.674667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.674685 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:12Z","lastTransitionTime":"2026-01-26T07:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.778260 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.778306 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.778327 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.778358 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.778381 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:12Z","lastTransitionTime":"2026-01-26T07:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.881816 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.881882 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.881904 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.881933 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.881955 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:12Z","lastTransitionTime":"2026-01-26T07:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.985412 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.985480 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.985507 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.985593 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:12 crc kubenswrapper[4806]: I0126 07:55:12.985623 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:12Z","lastTransitionTime":"2026-01-26T07:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.088672 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.088719 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.088731 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.088748 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.088759 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:13Z","lastTransitionTime":"2026-01-26T07:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.147669 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 09:37:29.421639124 +0000 UTC Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.191425 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.191498 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.191512 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.191548 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.191561 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:13Z","lastTransitionTime":"2026-01-26T07:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.294934 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.295043 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.295060 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.295081 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.295098 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:13Z","lastTransitionTime":"2026-01-26T07:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.398153 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.398204 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.398220 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.398246 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.398270 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:13Z","lastTransitionTime":"2026-01-26T07:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.500398 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.500445 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.500457 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.500473 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.500485 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:13Z","lastTransitionTime":"2026-01-26T07:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.603495 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.603556 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.603567 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.603583 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.603594 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:13Z","lastTransitionTime":"2026-01-26T07:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.706606 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.706644 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.706656 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.706676 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.706689 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:13Z","lastTransitionTime":"2026-01-26T07:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.809632 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.809711 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.809734 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.809765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.809789 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:13Z","lastTransitionTime":"2026-01-26T07:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.913276 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.913356 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.913376 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.913411 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:13 crc kubenswrapper[4806]: I0126 07:55:13.913434 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:13Z","lastTransitionTime":"2026-01-26T07:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.016351 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.016458 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.016479 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.016513 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.016578 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:14Z","lastTransitionTime":"2026-01-26T07:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.040988 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.041138 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.041004 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:14 crc kubenswrapper[4806]: E0126 07:55:14.041187 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.041039 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:14 crc kubenswrapper[4806]: E0126 07:55:14.041336 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:14 crc kubenswrapper[4806]: E0126 07:55:14.041453 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:14 crc kubenswrapper[4806]: E0126 07:55:14.041699 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.118658 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.118709 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.118726 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.118747 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.118762 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:14Z","lastTransitionTime":"2026-01-26T07:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.148500 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:03:15.037036211 +0000 UTC Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.222206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.222258 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.222269 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.222287 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.222323 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:14Z","lastTransitionTime":"2026-01-26T07:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.324948 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.325022 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.325045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.325076 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.325096 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:14Z","lastTransitionTime":"2026-01-26T07:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.428086 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.428147 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.428165 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.428197 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.428215 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:14Z","lastTransitionTime":"2026-01-26T07:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.531696 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.531746 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.531762 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.531786 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.531803 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:14Z","lastTransitionTime":"2026-01-26T07:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.633723 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.633773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.633791 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.633817 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.633839 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:14Z","lastTransitionTime":"2026-01-26T07:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.736873 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.736932 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.736954 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.736981 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.737002 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:14Z","lastTransitionTime":"2026-01-26T07:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.839643 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.839706 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.839720 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.839734 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.839742 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:14Z","lastTransitionTime":"2026-01-26T07:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.942932 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.943013 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.943029 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.943054 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:14 crc kubenswrapper[4806]: I0126 07:55:14.943072 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:14Z","lastTransitionTime":"2026-01-26T07:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.046325 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.046378 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.046392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.046414 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.046428 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:15Z","lastTransitionTime":"2026-01-26T07:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.148668 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 19:51:17.109431829 +0000 UTC Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.148865 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.148927 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.148944 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.148970 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.148985 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:15Z","lastTransitionTime":"2026-01-26T07:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.252442 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.252490 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.252503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.252539 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.252552 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:15Z","lastTransitionTime":"2026-01-26T07:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.355277 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.355338 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.355355 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.355379 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.355396 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:15Z","lastTransitionTime":"2026-01-26T07:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.458515 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.458676 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.458700 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.458723 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.458741 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:15Z","lastTransitionTime":"2026-01-26T07:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.561444 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.561493 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.561505 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.561543 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.561555 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:15Z","lastTransitionTime":"2026-01-26T07:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.663882 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.663954 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.663964 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.663977 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.663987 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:15Z","lastTransitionTime":"2026-01-26T07:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.766325 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.766381 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.766393 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.766412 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.766424 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:15Z","lastTransitionTime":"2026-01-26T07:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.868817 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.868858 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.868870 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.868888 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.868901 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:15Z","lastTransitionTime":"2026-01-26T07:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.971212 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.971253 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.971262 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.971276 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:15 crc kubenswrapper[4806]: I0126 07:55:15.971285 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:15Z","lastTransitionTime":"2026-01-26T07:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.041670 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.041707 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.041699 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.041694 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:16 crc kubenswrapper[4806]: E0126 07:55:16.042072 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:16 crc kubenswrapper[4806]: E0126 07:55:16.042173 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:16 crc kubenswrapper[4806]: E0126 07:55:16.042240 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:16 crc kubenswrapper[4806]: E0126 07:55:16.042340 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.074896 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.074965 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.074977 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.074998 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.075011 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:16Z","lastTransitionTime":"2026-01-26T07:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.149098 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 18:18:39.21115254 +0000 UTC Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.177798 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.177865 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.177882 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.177904 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.177919 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:16Z","lastTransitionTime":"2026-01-26T07:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.281712 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.281812 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.281826 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.281907 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.281923 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:16Z","lastTransitionTime":"2026-01-26T07:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.385378 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.385453 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.385471 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.385500 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.385542 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:16Z","lastTransitionTime":"2026-01-26T07:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.488787 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.488856 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.488876 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.488906 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.488926 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:16Z","lastTransitionTime":"2026-01-26T07:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.592456 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.592593 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.592612 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.592640 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.592660 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:16Z","lastTransitionTime":"2026-01-26T07:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.695068 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.695104 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.695113 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.695127 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.695136 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:16Z","lastTransitionTime":"2026-01-26T07:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.798841 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.798900 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.798920 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.798945 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.798962 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:16Z","lastTransitionTime":"2026-01-26T07:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.903205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.903283 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.903302 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.903339 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:16 crc kubenswrapper[4806]: I0126 07:55:16.903359 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:16Z","lastTransitionTime":"2026-01-26T07:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.010231 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.010307 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.010333 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.010368 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.010390 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:17Z","lastTransitionTime":"2026-01-26T07:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.112978 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.113045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.113058 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.113078 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.113093 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:17Z","lastTransitionTime":"2026-01-26T07:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.150078 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 07:15:12.013177681 +0000 UTC Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.216499 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.216608 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.216628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.216660 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.216680 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:17Z","lastTransitionTime":"2026-01-26T07:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.321400 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.321467 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.321487 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.321515 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.321571 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:17Z","lastTransitionTime":"2026-01-26T07:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.424181 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.424266 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.424279 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.424295 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.424306 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:17Z","lastTransitionTime":"2026-01-26T07:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.527222 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.527267 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.527278 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.527299 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.527309 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:17Z","lastTransitionTime":"2026-01-26T07:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.630702 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.630788 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.630803 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.630851 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.630868 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:17Z","lastTransitionTime":"2026-01-26T07:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.734362 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.734407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.734416 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.734433 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.734443 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:17Z","lastTransitionTime":"2026-01-26T07:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.837392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.837422 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.837431 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.837444 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.837455 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:17Z","lastTransitionTime":"2026-01-26T07:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.941222 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.941268 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.941281 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.941309 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:17 crc kubenswrapper[4806]: I0126 07:55:17.941323 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:17Z","lastTransitionTime":"2026-01-26T07:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.041238 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.041355 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.041438 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.041617 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:18 crc kubenswrapper[4806]: E0126 07:55:18.041604 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:18 crc kubenswrapper[4806]: E0126 07:55:18.041827 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:18 crc kubenswrapper[4806]: E0126 07:55:18.042018 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:18 crc kubenswrapper[4806]: E0126 07:55:18.042485 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.044769 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.044850 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.044872 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.044902 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.044923 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:18Z","lastTransitionTime":"2026-01-26T07:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.147686 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.147743 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.147755 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.147777 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.147791 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:18Z","lastTransitionTime":"2026-01-26T07:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.150960 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 05:49:33.653033142 +0000 UTC Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.251434 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.251495 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.251570 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.251608 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.251638 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:18Z","lastTransitionTime":"2026-01-26T07:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.355018 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.355066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.355078 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.355098 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.355111 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:18Z","lastTransitionTime":"2026-01-26T07:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.479995 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.480059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.480077 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.480103 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.480127 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:18Z","lastTransitionTime":"2026-01-26T07:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.583714 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.583801 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.583824 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.583859 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.583884 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:18Z","lastTransitionTime":"2026-01-26T07:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.687026 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.687067 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.687079 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.687098 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.687112 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:18Z","lastTransitionTime":"2026-01-26T07:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.696032 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:18 crc kubenswrapper[4806]: E0126 07:55:18.696181 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:55:18 crc kubenswrapper[4806]: E0126 07:55:18.696234 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs podName:137029f0-49ad-4400-b117-2eff9271bce3 nodeName:}" failed. No retries permitted until 2026-01-26 07:56:22.696216875 +0000 UTC m=+161.960624931 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs") pod "network-metrics-daemon-rqmvf" (UID: "137029f0-49ad-4400-b117-2eff9271bce3") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.790553 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.790622 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.790638 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.790663 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.790680 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:18Z","lastTransitionTime":"2026-01-26T07:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.893867 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.893918 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.893936 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.893960 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.893977 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:18Z","lastTransitionTime":"2026-01-26T07:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.997306 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.997360 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.997379 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.997405 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:18 crc kubenswrapper[4806]: I0126 07:55:18.997423 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:18Z","lastTransitionTime":"2026-01-26T07:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.100441 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.100563 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.100594 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.100630 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.100653 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:19Z","lastTransitionTime":"2026-01-26T07:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.151902 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 19:40:18.512449129 +0000 UTC Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.203965 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.204007 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.204024 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.204043 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.204057 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:19Z","lastTransitionTime":"2026-01-26T07:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.306886 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.306971 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.306987 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.307006 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.307023 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:19Z","lastTransitionTime":"2026-01-26T07:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.410145 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.410216 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.410240 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.410276 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.410300 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:19Z","lastTransitionTime":"2026-01-26T07:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.513510 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.513632 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.513658 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.513695 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.513720 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:19Z","lastTransitionTime":"2026-01-26T07:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.618206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.618269 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.618287 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.618313 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.618328 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:19Z","lastTransitionTime":"2026-01-26T07:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.721124 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.721172 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.721185 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.721206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.721221 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:19Z","lastTransitionTime":"2026-01-26T07:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.824481 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.824565 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.824577 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.824599 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.824614 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:19Z","lastTransitionTime":"2026-01-26T07:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.928309 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.928367 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.928380 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.928404 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:19 crc kubenswrapper[4806]: I0126 07:55:19.928421 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:19Z","lastTransitionTime":"2026-01-26T07:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.031324 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.031384 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.031398 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.031424 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.031441 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:20Z","lastTransitionTime":"2026-01-26T07:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.041763 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.041821 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.041824 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.041918 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:20 crc kubenswrapper[4806]: E0126 07:55:20.042082 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:20 crc kubenswrapper[4806]: E0126 07:55:20.042297 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:20 crc kubenswrapper[4806]: E0126 07:55:20.042370 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:20 crc kubenswrapper[4806]: E0126 07:55:20.042588 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.134270 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.134336 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.134357 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.134387 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.134408 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:20Z","lastTransitionTime":"2026-01-26T07:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.152571 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 11:59:11.945041779 +0000 UTC Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.237390 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.237452 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.237469 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.237495 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.237516 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:20Z","lastTransitionTime":"2026-01-26T07:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.340493 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.340641 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.340667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.340702 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.340727 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:20Z","lastTransitionTime":"2026-01-26T07:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.444912 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.444973 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.444997 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.445031 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.445055 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:20Z","lastTransitionTime":"2026-01-26T07:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.548048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.548096 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.548107 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.548171 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.548184 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:20Z","lastTransitionTime":"2026-01-26T07:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.652565 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.652634 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.652652 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.652679 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.653060 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:20Z","lastTransitionTime":"2026-01-26T07:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.760252 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.760315 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.760335 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.760362 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.760382 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:20Z","lastTransitionTime":"2026-01-26T07:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.863911 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.863990 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.864013 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.864045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.864063 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:20Z","lastTransitionTime":"2026-01-26T07:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.970323 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.970467 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.970494 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.970545 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:20 crc kubenswrapper[4806]: I0126 07:55:20.970575 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:20Z","lastTransitionTime":"2026-01-26T07:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.065599 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6dfe44c66a73660b38211ed7ae8fbb6a036c2ebc6a2716dd0e36da70b8d1fcc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc 
kubenswrapper[4806]: I0126 07:55:21.073724 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.073806 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.073821 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.073851 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.073872 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.083805 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d07502a2-50b0-4012-b335-340a1c694c50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03d55ca5e1d542a5cdb726c1e581fb9b644e237bf4575794fedadf60386c339c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67x45\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-k2tlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.105951 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0f2d0753-513a-4924-9985-d1058d2cda9b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da291d9c45edfc19eed33fce385cb1d8005585d2e9ac5fdfc50040a9c3038683\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c6719f7820924ce02c29e99fc9fb5c6e733597c1f006b3a2701046680af978e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c
7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8e90cbd21cd7080a82e2440f441264b1fd1b45ee048fbcadc7afbd7b4384996\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e49f6187d5b008fdb24b0f1a268d099bba2d3700d3ebd07c6f2d05e5718bf990\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.129911 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.150637 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pw8cg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c73a9f4-20b2-4c8a-b25d-413770be4fac\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://792a2a27d6043da345b05c611a152abc9233ff61958434dc7a7f8c13d185fc04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qt9w8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pw8cg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.153136 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 06:51:14.905530188 +0000 UTC Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.168928 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"47635c72-c532-48f3-839a-d86393eb5d24\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b7ea777c2c49c1bc930bff2584c62a6ebd7752638e4d5aec2818f56cf0678ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4910b820b4227a848efb96e462e633150fe7e06295dc04a0d9c45636cc4b83bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s4lbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:12Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tfbl7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 
07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.176595 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.176666 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.176734 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.176761 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.176778 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.194691 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.212329 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.238938 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df2814fbaab959fae53eddddeffb17ab97a5a8ad
31fbc20bcca4cdad140a24e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:56Z\\\",\\\"message\\\":\\\"712973235162149816) with []\\\\nI0126 07:54:56.255878 6711 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 07:54:56.255905 6711 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 07:54:56.255984 6711 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 07:54:56.256027 6711 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 07:54:56.256081 6711 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 07:54:56.256139 6711 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 07:54:56.256186 6711 factory.go:656] Stopping watch factory\\\\nI0126 07:54:56.256241 6711 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 07:54:56.256268 6711 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 07:54:56.256352 6711 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 07:54:56.256438 6711 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 07:54:56.256482 6711 ovnkube.go:599] Stopped ovnkube\\\\nI0126 07:54:56.256561 6711 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 07:54:56.256649 6711 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bh82q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-8mw7z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.258164 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"137029f0-49ad-4400-b117-2eff9271bce3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs9zt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:14Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqmvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.274985 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a365c4a-dfc5-4290-920d-f1f04e322061\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d54a191a8969d10a108b9fa36fe6c03f55969e7bd6c73aef14d2936d92290ccb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad400013f30ae5d55431037940b99d47dc5a2449b4535963218a0e9510e5fcc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.280733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.280785 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.280802 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.280828 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.280846 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.296146 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a2716f1-7f63-4f1b-88e7-412b33a5afe3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1797a908e33ef3678ac1471e937a8e52c11dbb31d8de0f19d400a01706a98b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f94ceee9c2ed3bd1c5ba58ee7e5047dd5f0a325c610c4277705f3426e69ed79\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50228cb176a3defc821a78ed75d837706c2de3a3414f02e0afacf81289a15999\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.316358 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://acf0880fe013b6ee2bf3e8342e19ebbdc21b0f7aeb69749dce219fe0e5e4939a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.338395 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0298829a559cb39d3f8d2f3b0e53b249e74c432ba82385ab628135d53fa77272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://70db7a55580b02bc2be913a8978f895c5a3826133883cf1c838bc84a76e89af1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.353588 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-wx526" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"265e15b3-6ef8-47df-ab15-dcc9bd9574ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5080e1085d9267f3e432eaf62a6c620f7e59db08140bd5a901ab21840bfd6bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cmfr4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wx526\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.369707 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-d7glh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T07:54:47Z\\\",\\\"message\\\":\\\"2026-01-26T07:54:01+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59\\\\n2026-01-26T07:54:01+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6a15fa94-db47-433a-a1f7-0d0ac0229e59 to /host/opt/cni/bin/\\\\n2026-01-26T07:54:02Z [verbose] multus-daemon started\\\\n2026-01-26T07:54:02Z [verbose] Readiness Indicator file check\\\\n2026-01-26T07:54:47Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xpcqh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-d7glh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.384842 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.384914 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.384934 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.384963 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.384982 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.392479 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-268q5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"59844d88-1bf9-4761-b664-74623e7532c3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://80fc9c4f56c3d017653449212b2b9457a701e8c567317798eb12c695bccec5c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://19fc2da16d6c76dad40cc2947eb8da01b1ead5804be80df5756c132456575909\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a426f2a63c3cdd56cec3bbd1f6e88ff76808304da3fe757043afeb2feb26614\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5459f7b834ef08b014a811d0955763235cb4dbb6cbe92670202b03f51f7d684d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f1efe94940fbf1babcccc68efae1d38751c22232d1a2f197485057be906db80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:04Z\\\",\\\"reason\\\":\
\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1332a28e0cf08275ce281997fdc5674f44e397f8ff02fdd19fed12c2de2b70a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://638da847467b7cade9c68a764044417e85bfa99cae93105cb70b260d3668c1a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:54:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gt988\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:54:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-268q5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.424420 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"674c5d9f-0f9e-47cf-b6bb-5a011bd0af68\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://06b15982445e897ebbaf881e9a84973b9b700b0763ea6dfc0dfa3d5adb3d5b90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://67cf748dcc41d3402937cba97eadb7311a4fd7ab359a24768ffd4b6f689e9e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://374c11776db902d4a30fdeab722bf8c85be53043b87555ac42f9f18bbfca5f78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ded50f9353eb262b2587b8d1a59d8e1a06946c9b228eb0991fe35ad79f19567b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7dc1a26bcb2eff0795009b3d88f7fedd86bccf553e9b8b86f8bc4bdda154566d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6b85e7f36ac03113362c2ab6b15fb73e517599c6a56c6a410b9dead70191502f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7324fbdf53b0be79696bb4bf870fe072761633e09cc88cce3a213a40b339410d\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1987002ea8dfdacf427291f1757243727d744b1fe99d0be1f8cddc26beb74fc9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.444009 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:54:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-26T07:53:59Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0126 07:53:53.366863 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0126 07:53:53.368547 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1947576678/tls.crt::/tmp/serving-cert-1947576678/tls.key\\\\\\\"\\\\nI0126 07:53:59.068951 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0126 07:53:59.072863 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0126 07:53:59.072893 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0126 07:53:59.072922 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0126 07:53:59.072931 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0126 07:53:59.078386 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0126 07:53:59.078432 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078439 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0126 07:53:59.078445 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0126 07:53:59.078449 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0126 07:53:59.078455 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0126 07:53:59.078459 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0126 07:53:59.078571 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0126 07:53:59.084946 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:43Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T07:53:42Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T07:53:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T07:53:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T07:53:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.488643 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.488709 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.488732 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.488768 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.488795 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.586288 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.586747 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.586834 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.586931 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.587022 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: E0126 07:55:21.606839 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 
2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.612796 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.612872 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.612893 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.612930 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.612955 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: E0126 07:55:21.631588 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 
2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.638454 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.638547 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.638568 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.638599 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.638618 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: E0126 07:55:21.659587 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 
2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.665445 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.665515 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.665592 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.665627 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.665653 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: E0126 07:55:21.686163 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 
2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.691816 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.691982 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.692089 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.692195 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.692288 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: E0126 07:55:21.709945 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148060Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608860Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T07:55:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6d591560-a509-477f-85dc-1a92a429bf2e\\\",\\\"systemUUID\\\":\\\"8cee8155-a08c-4d0d-aec6-2f132dd9ee01\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T07:55:21Z is after 
2025-08-24T17:21:41Z" Jan 26 07:55:21 crc kubenswrapper[4806]: E0126 07:55:21.710300 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.712736 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.712892 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.712981 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.713140 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.713228 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.816200 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.816755 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.816924 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.817084 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.817295 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.920073 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.920159 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.920185 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.920213 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:21 crc kubenswrapper[4806]: I0126 07:55:21.920235 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:21Z","lastTransitionTime":"2026-01-26T07:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.023560 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.023648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.023671 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.023703 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.023728 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:22Z","lastTransitionTime":"2026-01-26T07:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.041618 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:22 crc kubenswrapper[4806]: E0126 07:55:22.041803 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.041903 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:22 crc kubenswrapper[4806]: E0126 07:55:22.041984 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.042058 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:22 crc kubenswrapper[4806]: E0126 07:55:22.042144 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.042211 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:22 crc kubenswrapper[4806]: E0126 07:55:22.042309 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.127477 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.127761 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.127910 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.128059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.128224 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:22Z","lastTransitionTime":"2026-01-26T07:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.154006 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 01:13:01.972963877 +0000 UTC Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.233404 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.233499 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.233547 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.233580 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.233601 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:22Z","lastTransitionTime":"2026-01-26T07:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.337845 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.337918 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.337936 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.337963 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.337983 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:22Z","lastTransitionTime":"2026-01-26T07:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.441915 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.442338 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.442488 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.442757 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.442980 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:22Z","lastTransitionTime":"2026-01-26T07:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.547336 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.547395 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.547412 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.547447 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.547467 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:22Z","lastTransitionTime":"2026-01-26T07:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.651659 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.651737 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.651754 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.651783 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.651803 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:22Z","lastTransitionTime":"2026-01-26T07:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.755733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.755804 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.755824 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.755852 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.755872 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:22Z","lastTransitionTime":"2026-01-26T07:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.859640 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.859696 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.859713 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.859742 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.859761 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:22Z","lastTransitionTime":"2026-01-26T07:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.962951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.963013 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.963032 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.963088 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:22 crc kubenswrapper[4806]: I0126 07:55:22.963107 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:22Z","lastTransitionTime":"2026-01-26T07:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.065890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.065942 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.065959 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.065985 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.066004 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:23Z","lastTransitionTime":"2026-01-26T07:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.155240 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 05:21:41.4897712 +0000 UTC Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.169170 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.169211 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.169233 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.169262 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.169281 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:23Z","lastTransitionTime":"2026-01-26T07:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.273926 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.274031 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.274059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.274103 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.274369 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:23Z","lastTransitionTime":"2026-01-26T07:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.378066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.378210 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.378233 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.378257 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.378274 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:23Z","lastTransitionTime":"2026-01-26T07:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.481932 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.482000 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.482019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.482092 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.482115 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:23Z","lastTransitionTime":"2026-01-26T07:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.585136 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.585217 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.585237 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.585276 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.585298 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:23Z","lastTransitionTime":"2026-01-26T07:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.688504 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.688607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.688625 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.688661 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.688681 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:23Z","lastTransitionTime":"2026-01-26T07:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.793252 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.793845 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.793866 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.793884 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.793897 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:23Z","lastTransitionTime":"2026-01-26T07:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.904247 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.904283 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.904291 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.904310 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:23 crc kubenswrapper[4806]: I0126 07:55:23.904323 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:23Z","lastTransitionTime":"2026-01-26T07:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.007687 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.007735 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.007748 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.007767 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.007780 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:24Z","lastTransitionTime":"2026-01-26T07:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.042155 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.042155 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:24 crc kubenswrapper[4806]: E0126 07:55:24.042326 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.042511 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:24 crc kubenswrapper[4806]: E0126 07:55:24.042584 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:24 crc kubenswrapper[4806]: E0126 07:55:24.042722 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.042726 4806 scope.go:117] "RemoveContainer" containerID="df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2" Jan 26 07:55:24 crc kubenswrapper[4806]: E0126 07:55:24.043187 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-8mw7z_openshift-ovn-kubernetes(1f8b8acb-f4cf-41db-82f8-94ffd21c1594)\"" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.043305 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:24 crc kubenswrapper[4806]: E0126 07:55:24.043451 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.110907 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.111230 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.111336 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.111453 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.111588 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:24Z","lastTransitionTime":"2026-01-26T07:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.155373 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 02:46:04.738807043 +0000 UTC Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.215193 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.215249 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.215268 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.215298 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.215317 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:24Z","lastTransitionTime":"2026-01-26T07:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.318511 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.319095 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.319271 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.319479 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.319713 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:24Z","lastTransitionTime":"2026-01-26T07:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.424254 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.424374 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.424397 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.424434 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.424456 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:24Z","lastTransitionTime":"2026-01-26T07:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.527804 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.527933 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.527953 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.527983 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.528003 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:24Z","lastTransitionTime":"2026-01-26T07:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.632207 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.632282 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.632309 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.632338 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.632359 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:24Z","lastTransitionTime":"2026-01-26T07:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.736388 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.736963 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.737363 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.737785 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.737990 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:24Z","lastTransitionTime":"2026-01-26T07:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.841889 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.841969 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.841995 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.842028 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.842054 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:24Z","lastTransitionTime":"2026-01-26T07:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.943869 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.943907 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.943918 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.943934 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:24 crc kubenswrapper[4806]: I0126 07:55:24.943946 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:24Z","lastTransitionTime":"2026-01-26T07:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.045903 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.045952 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.045961 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.045976 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.045987 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:25Z","lastTransitionTime":"2026-01-26T07:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.149488 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.149619 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.149642 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.149670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.149691 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:25Z","lastTransitionTime":"2026-01-26T07:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.156333 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 03:41:41.953780769 +0000 UTC Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.253419 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.253485 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.253502 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.253574 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.253599 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:25Z","lastTransitionTime":"2026-01-26T07:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.357190 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.357785 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.358105 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.358349 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.358795 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:25Z","lastTransitionTime":"2026-01-26T07:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.462899 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.463361 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.463441 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.463537 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.463611 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:25Z","lastTransitionTime":"2026-01-26T07:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.566607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.566693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.566713 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.566739 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.566758 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:25Z","lastTransitionTime":"2026-01-26T07:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.669406 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.669467 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.669487 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.669514 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.669601 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:25Z","lastTransitionTime":"2026-01-26T07:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.773172 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.773515 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.773668 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.773781 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.773923 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:25Z","lastTransitionTime":"2026-01-26T07:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.877560 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.877609 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.877647 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.877667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.877681 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:25Z","lastTransitionTime":"2026-01-26T07:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.980475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.980883 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.981020 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.981156 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:25 crc kubenswrapper[4806]: I0126 07:55:25.981240 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:25Z","lastTransitionTime":"2026-01-26T07:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.045205 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:26 crc kubenswrapper[4806]: E0126 07:55:26.045350 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.045491 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:26 crc kubenswrapper[4806]: E0126 07:55:26.045794 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.046012 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.046068 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:26 crc kubenswrapper[4806]: E0126 07:55:26.046204 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:26 crc kubenswrapper[4806]: E0126 07:55:26.046377 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.084309 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.084364 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.084383 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.084409 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.084428 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:26Z","lastTransitionTime":"2026-01-26T07:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.156689 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 21:49:01.205445209 +0000 UTC Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.187863 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.187910 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.187932 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.187964 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.187989 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:26Z","lastTransitionTime":"2026-01-26T07:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.292006 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.292098 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.292116 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.292178 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.292204 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:26Z","lastTransitionTime":"2026-01-26T07:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.396071 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.396153 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.396178 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.396211 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.396233 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:26Z","lastTransitionTime":"2026-01-26T07:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.500230 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.500291 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.500312 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.500344 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.500363 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:26Z","lastTransitionTime":"2026-01-26T07:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.603706 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.604199 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.604399 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.604614 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.604811 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:26Z","lastTransitionTime":"2026-01-26T07:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.707487 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.707799 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.707884 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.707969 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.708039 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:26Z","lastTransitionTime":"2026-01-26T07:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.810823 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.811192 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.811292 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.811422 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.811559 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:26Z","lastTransitionTime":"2026-01-26T07:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.914738 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.915422 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.915692 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.915883 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:26 crc kubenswrapper[4806]: I0126 07:55:26.916043 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:26Z","lastTransitionTime":"2026-01-26T07:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.019810 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.019923 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.019942 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.020033 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.020053 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:27Z","lastTransitionTime":"2026-01-26T07:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.123068 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.123136 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.123159 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.123188 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.123209 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:27Z","lastTransitionTime":"2026-01-26T07:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.157405 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 17:57:48.479440789 +0000 UTC Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.226615 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.226664 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.226678 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.226707 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.226726 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:27Z","lastTransitionTime":"2026-01-26T07:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.330793 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.330873 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.330900 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.330936 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.330964 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:27Z","lastTransitionTime":"2026-01-26T07:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.434496 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.434605 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.434625 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.434652 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.434673 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:27Z","lastTransitionTime":"2026-01-26T07:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.538284 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.538345 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.538363 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.538389 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.538407 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:27Z","lastTransitionTime":"2026-01-26T07:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.642468 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.642624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.642701 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.642784 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.642817 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:27Z","lastTransitionTime":"2026-01-26T07:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.746213 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.746308 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.746343 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.746379 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.746404 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:27Z","lastTransitionTime":"2026-01-26T07:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.849925 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.850005 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.850027 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.850067 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.850096 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:27Z","lastTransitionTime":"2026-01-26T07:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.952730 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.952776 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.952788 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.952806 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:27 crc kubenswrapper[4806]: I0126 07:55:27.952821 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:27Z","lastTransitionTime":"2026-01-26T07:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.041281 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:28 crc kubenswrapper[4806]: E0126 07:55:28.041428 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.041640 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:28 crc kubenswrapper[4806]: E0126 07:55:28.041689 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.042211 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:28 crc kubenswrapper[4806]: E0126 07:55:28.042275 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.042212 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:28 crc kubenswrapper[4806]: E0126 07:55:28.042377 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.055749 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.055778 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.055788 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.055802 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.055812 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:28Z","lastTransitionTime":"2026-01-26T07:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.157597 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:29:40.463810494 +0000 UTC Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.158870 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.158899 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.158910 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.158962 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.158975 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:28Z","lastTransitionTime":"2026-01-26T07:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.262323 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.262364 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.262378 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.262399 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.262412 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:28Z","lastTransitionTime":"2026-01-26T07:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.366292 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.366357 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.366370 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.366395 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.366415 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:28Z","lastTransitionTime":"2026-01-26T07:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.472724 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.472792 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.472812 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.472842 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.472867 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:28Z","lastTransitionTime":"2026-01-26T07:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.596511 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.596603 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.596617 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.596640 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.596655 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:28Z","lastTransitionTime":"2026-01-26T07:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.698947 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.698987 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.698996 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.699015 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.699027 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:28Z","lastTransitionTime":"2026-01-26T07:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.801244 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.801283 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.801293 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.801307 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.801317 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:28Z","lastTransitionTime":"2026-01-26T07:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.907793 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.907843 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.907859 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.907875 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:28 crc kubenswrapper[4806]: I0126 07:55:28.907892 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:28Z","lastTransitionTime":"2026-01-26T07:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.010905 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.010941 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.010950 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.010966 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.010975 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:29Z","lastTransitionTime":"2026-01-26T07:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.113445 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.113482 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.113491 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.113503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.113512 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:29Z","lastTransitionTime":"2026-01-26T07:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.158080 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 18:05:48.514970994 +0000 UTC Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.216207 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.216272 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.216294 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.216356 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.216380 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:29Z","lastTransitionTime":"2026-01-26T07:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.319120 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.319187 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.319199 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.319217 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.319230 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:29Z","lastTransitionTime":"2026-01-26T07:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.422729 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.422785 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.422801 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.422828 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.423189 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:29Z","lastTransitionTime":"2026-01-26T07:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.526716 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.526778 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.526800 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.526827 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.526847 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:29Z","lastTransitionTime":"2026-01-26T07:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.630738 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.630799 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.630822 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.630853 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.630875 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:29Z","lastTransitionTime":"2026-01-26T07:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.733737 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.733834 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.733903 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.733934 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.734017 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:29Z","lastTransitionTime":"2026-01-26T07:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.837903 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.837950 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.837967 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.837990 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.838006 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:29Z","lastTransitionTime":"2026-01-26T07:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.940360 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.940432 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.940450 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.940475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:29 crc kubenswrapper[4806]: I0126 07:55:29.940492 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:29Z","lastTransitionTime":"2026-01-26T07:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.041513 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.041637 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:30 crc kubenswrapper[4806]: E0126 07:55:30.041849 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.041932 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.041965 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:30 crc kubenswrapper[4806]: E0126 07:55:30.042111 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:30 crc kubenswrapper[4806]: E0126 07:55:30.042365 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:30 crc kubenswrapper[4806]: E0126 07:55:30.042563 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.043679 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.043726 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.043744 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.043766 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.043806 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:30Z","lastTransitionTime":"2026-01-26T07:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.146757 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.146821 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.146936 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.146968 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.146991 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:30Z","lastTransitionTime":"2026-01-26T07:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.158982 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:02:56.281265104 +0000 UTC Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.250414 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.250515 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.250555 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.250577 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.250591 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:30Z","lastTransitionTime":"2026-01-26T07:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.352550 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.352597 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.352611 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.352633 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.352647 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:30Z","lastTransitionTime":"2026-01-26T07:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.455764 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.455877 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.455904 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.455933 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.455954 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:30Z","lastTransitionTime":"2026-01-26T07:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.558682 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.558733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.558751 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.558773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.558787 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:30Z","lastTransitionTime":"2026-01-26T07:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.661797 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.661857 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.661883 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.661916 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.661940 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:30Z","lastTransitionTime":"2026-01-26T07:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.765436 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.765479 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.765490 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.765505 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.765516 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:30Z","lastTransitionTime":"2026-01-26T07:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.868456 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.868503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.868567 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.868594 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.868611 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:30Z","lastTransitionTime":"2026-01-26T07:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.972304 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.972748 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.972939 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.973131 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:30 crc kubenswrapper[4806]: I0126 07:55:30.973314 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:30Z","lastTransitionTime":"2026-01-26T07:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.081990 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.082101 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.082134 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.082895 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.082929 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:31Z","lastTransitionTime":"2026-01-26T07:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.105847 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=89.105813202 podStartE2EDuration="1m29.105813202s" podCreationTimestamp="2026-01-26 07:54:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:31.102321951 +0000 UTC m=+110.366730077" watchObservedRunningTime="2026-01-26 07:55:31.105813202 +0000 UTC m=+110.370221298" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.159621 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 23:43:02.491413361 +0000 UTC Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.173343 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=92.173309968 podStartE2EDuration="1m32.173309968s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:31.140158102 +0000 UTC m=+110.404566208" watchObservedRunningTime="2026-01-26 07:55:31.173309968 +0000 UTC m=+110.437718064" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.196183 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.196221 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.196229 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.196245 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.196257 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:31Z","lastTransitionTime":"2026-01-26T07:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.225967 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-d7glh" podStartSLOduration=92.225948086 podStartE2EDuration="1m32.225948086s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:31.224943907 +0000 UTC m=+110.489351973" watchObservedRunningTime="2026-01-26 07:55:31.225948086 +0000 UTC m=+110.490356152" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.226134 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wx526" podStartSLOduration=92.226127921 podStartE2EDuration="1m32.226127921s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:31.211001405 +0000 UTC m=+110.475409501" watchObservedRunningTime="2026-01-26 07:55:31.226127921 +0000 UTC m=+110.490535987" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.248187 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-268q5" podStartSLOduration=92.248161406 podStartE2EDuration="1m32.248161406s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:31.246621322 +0000 UTC m=+110.511029418" watchObservedRunningTime="2026-01-26 07:55:31.248161406 +0000 UTC m=+110.512569502" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.283328 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=59.28330526 podStartE2EDuration="59.28330526s" podCreationTimestamp="2026-01-26 07:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:31.263669704 +0000 UTC m=+110.528077770" watchObservedRunningTime="2026-01-26 07:55:31.28330526 +0000 UTC m=+110.547713326" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.300162 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.300252 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.300279 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.300310 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.300333 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:31Z","lastTransitionTime":"2026-01-26T07:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.332568 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podStartSLOduration=92.332543739 podStartE2EDuration="1m32.332543739s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:31.315453567 +0000 UTC m=+110.579861643" watchObservedRunningTime="2026-01-26 07:55:31.332543739 +0000 UTC m=+110.596951795" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.350602 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tfbl7" podStartSLOduration=91.350574019 podStartE2EDuration="1m31.350574019s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:31.350054594 +0000 UTC m=+110.614462680" watchObservedRunningTime="2026-01-26 07:55:31.350574019 +0000 UTC m=+110.614982085" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.350834 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pw8cg" podStartSLOduration=92.350827357 podStartE2EDuration="1m32.350827357s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:31.332931031 +0000 UTC m=+110.597339087" watchObservedRunningTime="2026-01-26 07:55:31.350827357 +0000 UTC m=+110.615235413" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.362679 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=41.362660338 podStartE2EDuration="41.362660338s" podCreationTimestamp="2026-01-26 07:54:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:31.361726501 +0000 UTC m=+110.626134557" watchObservedRunningTime="2026-01-26 07:55:31.362660338 +0000 UTC m=+110.627068394" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.381014 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=92.380988406 podStartE2EDuration="1m32.380988406s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:31.38043058 +0000 UTC m=+110.644838636" watchObservedRunningTime="2026-01-26 07:55:31.380988406 +0000 UTC m=+110.645396482" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.403459 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 
07:55:31.403565 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.403584 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.403609 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.403634 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:31Z","lastTransitionTime":"2026-01-26T07:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.507083 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.507129 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.507143 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.507171 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.507217 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:31Z","lastTransitionTime":"2026-01-26T07:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.609334 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.609373 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.609383 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.609398 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.609408 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:31Z","lastTransitionTime":"2026-01-26T07:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.712073 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.712141 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.712158 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.712185 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.712202 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:31Z","lastTransitionTime":"2026-01-26T07:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.814733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.814781 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.814798 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.814822 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.814838 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:31Z","lastTransitionTime":"2026-01-26T07:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.816442 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.816483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.816496 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.816516 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.816545 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T07:55:31Z","lastTransitionTime":"2026-01-26T07:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.860949 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd"] Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.861314 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.863881 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.863950 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.864095 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.864113 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.950925 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2094f2a5-8da6-478f-983d-707b852e425d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.951208 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2094f2a5-8da6-478f-983d-707b852e425d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.951238 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2094f2a5-8da6-478f-983d-707b852e425d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.951257 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2094f2a5-8da6-478f-983d-707b852e425d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:31 crc kubenswrapper[4806]: I0126 07:55:31.951281 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2094f2a5-8da6-478f-983d-707b852e425d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.041193 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.041246 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.041313 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.041477 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:32 crc kubenswrapper[4806]: E0126 07:55:32.041471 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:32 crc kubenswrapper[4806]: E0126 07:55:32.041664 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:32 crc kubenswrapper[4806]: E0126 07:55:32.041786 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:32 crc kubenswrapper[4806]: E0126 07:55:32.041828 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.052666 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2094f2a5-8da6-478f-983d-707b852e425d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.052784 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2094f2a5-8da6-478f-983d-707b852e425d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.052816 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2094f2a5-8da6-478f-983d-707b852e425d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.052824 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2094f2a5-8da6-478f-983d-707b852e425d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.052927 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2094f2a5-8da6-478f-983d-707b852e425d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.052934 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2094f2a5-8da6-478f-983d-707b852e425d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.053032 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2094f2a5-8da6-478f-983d-707b852e425d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.054176 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2094f2a5-8da6-478f-983d-707b852e425d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 
07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.062602 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2094f2a5-8da6-478f-983d-707b852e425d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.081809 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2094f2a5-8da6-478f-983d-707b852e425d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-vgtxd\" (UID: \"2094f2a5-8da6-478f-983d-707b852e425d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.172448 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 12:30:45.212880606 +0000 UTC Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.172566 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.180358 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.184827 4806 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.700542 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" event={"ID":"2094f2a5-8da6-478f-983d-707b852e425d","Type":"ContainerStarted","Data":"98c3a7d0b37c17121b0b7b57747c4c6e61a4ac4afa7bdb4d871b585459930c51"} Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.700596 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" event={"ID":"2094f2a5-8da6-478f-983d-707b852e425d","Type":"ContainerStarted","Data":"f5045dc3d80447783b4c0da77dad84c81fe9d49fff464014e326b450c465aeb8"} Jan 26 07:55:32 crc kubenswrapper[4806]: I0126 07:55:32.717152 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-vgtxd" podStartSLOduration=93.717125782 podStartE2EDuration="1m33.717125782s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:32.71705724 +0000 UTC m=+111.981465296" watchObservedRunningTime="2026-01-26 07:55:32.717125782 +0000 UTC m=+111.981533838" Jan 26 07:55:33 crc kubenswrapper[4806]: I0126 07:55:33.706097 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7glh_4320ae6b-0d73-47d7-9f2c-f3c5b6b69041/kube-multus/1.log" Jan 26 07:55:33 crc kubenswrapper[4806]: I0126 07:55:33.706582 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7glh_4320ae6b-0d73-47d7-9f2c-f3c5b6b69041/kube-multus/0.log" Jan 26 07:55:33 crc kubenswrapper[4806]: I0126 07:55:33.706631 4806 generic.go:334] "Generic (PLEG): container finished" podID="4320ae6b-0d73-47d7-9f2c-f3c5b6b69041" 
containerID="0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd" exitCode=1 Jan 26 07:55:33 crc kubenswrapper[4806]: I0126 07:55:33.706673 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7glh" event={"ID":"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041","Type":"ContainerDied","Data":"0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd"} Jan 26 07:55:33 crc kubenswrapper[4806]: I0126 07:55:33.706725 4806 scope.go:117] "RemoveContainer" containerID="9da5e8c52d415e76190ace196927bd1783b08990164b11379940ecc2b6734551" Jan 26 07:55:33 crc kubenswrapper[4806]: I0126 07:55:33.707257 4806 scope.go:117] "RemoveContainer" containerID="0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd" Jan 26 07:55:33 crc kubenswrapper[4806]: E0126 07:55:33.707598 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-d7glh_openshift-multus(4320ae6b-0d73-47d7-9f2c-f3c5b6b69041)\"" pod="openshift-multus/multus-d7glh" podUID="4320ae6b-0d73-47d7-9f2c-f3c5b6b69041" Jan 26 07:55:34 crc kubenswrapper[4806]: I0126 07:55:34.040953 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:34 crc kubenswrapper[4806]: E0126 07:55:34.041096 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:34 crc kubenswrapper[4806]: I0126 07:55:34.041179 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:34 crc kubenswrapper[4806]: I0126 07:55:34.041287 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:34 crc kubenswrapper[4806]: I0126 07:55:34.041322 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:34 crc kubenswrapper[4806]: E0126 07:55:34.041601 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:34 crc kubenswrapper[4806]: E0126 07:55:34.041820 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:34 crc kubenswrapper[4806]: E0126 07:55:34.041933 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:34 crc kubenswrapper[4806]: I0126 07:55:34.711809 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7glh_4320ae6b-0d73-47d7-9f2c-f3c5b6b69041/kube-multus/1.log" Jan 26 07:55:36 crc kubenswrapper[4806]: I0126 07:55:36.041296 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:36 crc kubenswrapper[4806]: I0126 07:55:36.041353 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:36 crc kubenswrapper[4806]: I0126 07:55:36.041300 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:36 crc kubenswrapper[4806]: E0126 07:55:36.041596 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:36 crc kubenswrapper[4806]: E0126 07:55:36.041664 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:36 crc kubenswrapper[4806]: E0126 07:55:36.041707 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:36 crc kubenswrapper[4806]: I0126 07:55:36.042606 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:36 crc kubenswrapper[4806]: E0126 07:55:36.042857 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:38 crc kubenswrapper[4806]: I0126 07:55:38.041181 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:38 crc kubenswrapper[4806]: I0126 07:55:38.041231 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:38 crc kubenswrapper[4806]: I0126 07:55:38.041293 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:38 crc kubenswrapper[4806]: I0126 07:55:38.041325 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:38 crc kubenswrapper[4806]: E0126 07:55:38.041402 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:38 crc kubenswrapper[4806]: E0126 07:55:38.041614 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:38 crc kubenswrapper[4806]: E0126 07:55:38.041747 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:38 crc kubenswrapper[4806]: E0126 07:55:38.041824 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:39 crc kubenswrapper[4806]: I0126 07:55:39.042581 4806 scope.go:117] "RemoveContainer" containerID="df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2" Jan 26 07:55:39 crc kubenswrapper[4806]: I0126 07:55:39.733467 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/3.log" Jan 26 07:55:39 crc kubenswrapper[4806]: I0126 07:55:39.736398 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerStarted","Data":"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773"} Jan 26 07:55:39 crc kubenswrapper[4806]: I0126 07:55:39.736823 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:55:39 crc kubenswrapper[4806]: I0126 07:55:39.762778 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podStartSLOduration=99.762760309 podStartE2EDuration="1m39.762760309s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:39.762017918 +0000 UTC m=+119.026425974" watchObservedRunningTime="2026-01-26 07:55:39.762760309 +0000 UTC m=+119.027168365" Jan 26 07:55:39 crc kubenswrapper[4806]: I0126 07:55:39.966587 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rqmvf"] Jan 26 07:55:39 crc kubenswrapper[4806]: I0126 07:55:39.966745 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:39 crc kubenswrapper[4806]: E0126 07:55:39.966858 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:40 crc kubenswrapper[4806]: I0126 07:55:40.041495 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:40 crc kubenswrapper[4806]: E0126 07:55:40.041630 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:40 crc kubenswrapper[4806]: I0126 07:55:40.041645 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:40 crc kubenswrapper[4806]: I0126 07:55:40.041678 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:40 crc kubenswrapper[4806]: E0126 07:55:40.041741 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:40 crc kubenswrapper[4806]: E0126 07:55:40.041788 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:40 crc kubenswrapper[4806]: E0126 07:55:40.965046 4806 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 07:55:41 crc kubenswrapper[4806]: E0126 07:55:41.126418 4806 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 07:55:42 crc kubenswrapper[4806]: I0126 07:55:42.041816 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:42 crc kubenswrapper[4806]: E0126 07:55:42.042180 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:42 crc kubenswrapper[4806]: I0126 07:55:42.041962 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:42 crc kubenswrapper[4806]: E0126 07:55:42.042251 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:42 crc kubenswrapper[4806]: I0126 07:55:42.041814 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:42 crc kubenswrapper[4806]: E0126 07:55:42.042301 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:42 crc kubenswrapper[4806]: I0126 07:55:42.041986 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:42 crc kubenswrapper[4806]: E0126 07:55:42.042352 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:44 crc kubenswrapper[4806]: I0126 07:55:44.040809 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:44 crc kubenswrapper[4806]: I0126 07:55:44.040887 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:44 crc kubenswrapper[4806]: E0126 07:55:44.040938 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:44 crc kubenswrapper[4806]: I0126 07:55:44.040809 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:44 crc kubenswrapper[4806]: E0126 07:55:44.041132 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:44 crc kubenswrapper[4806]: E0126 07:55:44.041656 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:44 crc kubenswrapper[4806]: I0126 07:55:44.041713 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:44 crc kubenswrapper[4806]: E0126 07:55:44.041790 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:46 crc kubenswrapper[4806]: I0126 07:55:46.041048 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:46 crc kubenswrapper[4806]: I0126 07:55:46.041108 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:46 crc kubenswrapper[4806]: E0126 07:55:46.041248 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:46 crc kubenswrapper[4806]: I0126 07:55:46.041384 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:46 crc kubenswrapper[4806]: E0126 07:55:46.041635 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:46 crc kubenswrapper[4806]: E0126 07:55:46.041803 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:46 crc kubenswrapper[4806]: I0126 07:55:46.041651 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:46 crc kubenswrapper[4806]: E0126 07:55:46.041956 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:46 crc kubenswrapper[4806]: E0126 07:55:46.128441 4806 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 26 07:55:47 crc kubenswrapper[4806]: I0126 07:55:47.042878 4806 scope.go:117] "RemoveContainer" containerID="0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd" Jan 26 07:55:47 crc kubenswrapper[4806]: I0126 07:55:47.763136 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7glh_4320ae6b-0d73-47d7-9f2c-f3c5b6b69041/kube-multus/1.log" Jan 26 07:55:47 crc kubenswrapper[4806]: I0126 07:55:47.763465 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7glh" event={"ID":"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041","Type":"ContainerStarted","Data":"e417d91e63473ab979f371a0a51d02ca944a89619a0becc7adeeadfc324a0b88"} Jan 26 07:55:48 crc kubenswrapper[4806]: I0126 07:55:48.041396 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:48 crc kubenswrapper[4806]: I0126 07:55:48.041448 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:48 crc kubenswrapper[4806]: I0126 07:55:48.041480 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:48 crc kubenswrapper[4806]: E0126 07:55:48.041662 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:48 crc kubenswrapper[4806]: I0126 07:55:48.041705 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:48 crc kubenswrapper[4806]: E0126 07:55:48.041851 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:48 crc kubenswrapper[4806]: E0126 07:55:48.042005 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:48 crc kubenswrapper[4806]: E0126 07:55:48.042408 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:50 crc kubenswrapper[4806]: I0126 07:55:50.041570 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:50 crc kubenswrapper[4806]: E0126 07:55:50.041723 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 07:55:50 crc kubenswrapper[4806]: I0126 07:55:50.041574 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:50 crc kubenswrapper[4806]: E0126 07:55:50.042000 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 07:55:50 crc kubenswrapper[4806]: I0126 07:55:50.041574 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:50 crc kubenswrapper[4806]: E0126 07:55:50.042101 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqmvf" podUID="137029f0-49ad-4400-b117-2eff9271bce3" Jan 26 07:55:50 crc kubenswrapper[4806]: I0126 07:55:50.042487 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:50 crc kubenswrapper[4806]: E0126 07:55:50.042594 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 07:55:51 crc kubenswrapper[4806]: I0126 07:55:51.920366 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 07:55:51 crc kubenswrapper[4806]: I0126 07:55:51.982610 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7pdxm"] Jan 26 07:55:51 crc kubenswrapper[4806]: I0126 07:55:51.983260 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:51 crc kubenswrapper[4806]: I0126 07:55:51.988159 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s"] Jan 26 07:55:51 crc kubenswrapper[4806]: I0126 07:55:51.989110 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:51 crc kubenswrapper[4806]: I0126 07:55:51.994953 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gk5q9"] Jan 26 07:55:51 crc kubenswrapper[4806]: I0126 07:55:51.996507 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.002043 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.002078 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8f91104-da2f-4f2a-90b2-619d9035f8ca-serving-cert\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.002104 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2n57\" (UniqueName: \"kubernetes.io/projected/e8f91104-da2f-4f2a-90b2-619d9035f8ca-kube-api-access-f2n57\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.002152 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-client-ca\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.002190 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-config\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.003431 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.003768 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.004275 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.004612 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.005582 4806 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.005826 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.005985 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.006169 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.006348 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.006541 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.006701 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.006836 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.006976 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.007111 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.007256 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.007247 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.008079 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.008280 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.008620 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.010006 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.014864 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.015418 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.016033 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.016079 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4gdvx"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.016924 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.018691 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rgn89"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.019044 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.022433 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hdxh9"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.022952 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.024644 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.034177 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.034571 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-qd6mh"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.037695 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.024663 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.043931 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.044227 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.049158 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d46vj"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.051497 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.037511 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.051756 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.060369 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.060436 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.060589 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.061392 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d46vj" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.061857 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.062005 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.062747 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.062771 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.063622 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.063649 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.064153 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.064281 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.064482 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.065500 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.065914 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.066259 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.066849 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.066971 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.067634 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.067854 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.067961 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.068139 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.068244 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.068499 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.068828 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.068911 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.069028 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.069277 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.069435 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.069482 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.069650 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.070696 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7pdxm"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.070993 4806 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.071367 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gk5q9"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.072542 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.078281 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.078996 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-htpwn"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.079738 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.080612 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.081870 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.082160 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.082282 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.082345 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.082298 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.082173 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.083187 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.083457 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.083562 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.085390 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.086070 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.086486 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.086638 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.089374 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.093639 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-s7jrc"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.094280 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.094655 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.095096 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.095407 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-s7jrc" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.095617 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.095632 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.095817 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.095933 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.096104 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.096214 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.096330 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.096442 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.096760 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.096973 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.097145 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.097649 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.097806 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.097973 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.098201 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.098411 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.098642 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.098832 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.099023 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.099570 4806 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.099748 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.099944 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.100288 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.101591 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.102138 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.102414 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tncnb"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.103099 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.103258 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.103373 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119165 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-config\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119220 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-etcd-serving-ca\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119247 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ba883c9-d8a0-42ec-8894-87769eabf95b-audit-dir\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119269 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b7f2154-fe8d-4ae4-8009-feb30d797f9b-config\") pod \"console-operator-58897d9998-rgn89\" (UID: \"3b7f2154-fe8d-4ae4-8009-feb30d797f9b\") " pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119296 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-trusted-ca-bundle\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119317 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8dc6ffac-58e2-477a-adaf-3eb1de776a9c-auth-proxy-config\") pod \"machine-approver-56656f9798-9jh6s\" (UID: \"8dc6ffac-58e2-477a-adaf-3eb1de776a9c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119338 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvjcz\" (UniqueName: \"kubernetes.io/projected/3ba883c9-d8a0-42ec-8894-87769eabf95b-kube-api-access-cvjcz\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119362 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bgrq\" (UniqueName: \"kubernetes.io/projected/ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1-kube-api-access-4bgrq\") pod \"cluster-samples-operator-665b6dd947-njnsk\" (UID: \"ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119382 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119404 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f97371c-7dc2-4170-90e7-f044dcc62f2a-serving-cert\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119425 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14769a57-f19b-4d49-868f-d1754827714b-config\") pod \"route-controller-manager-6576b87f9c-q5z6f\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119444 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-config\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119469 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f97371c-7dc2-4170-90e7-f044dcc62f2a-config\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119492 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14769a57-f19b-4d49-868f-d1754827714b-client-ca\") pod \"route-controller-manager-6576b87f9c-q5z6f\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119514 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f3802bf-e4bc-4952-9e22-428d62ec0349-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gk5q9\" (UID: \"0f3802bf-e4bc-4952-9e22-428d62ec0349\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119555 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b7f2154-fe8d-4ae4-8009-feb30d797f9b-trusted-ca\") pod \"console-operator-58897d9998-rgn89\" (UID: \"3b7f2154-fe8d-4ae4-8009-feb30d797f9b\") " pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119577 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0f3802bf-e4bc-4952-9e22-428d62ec0349-images\") pod \"machine-api-operator-5694c8668f-gk5q9\" (UID: \"0f3802bf-e4bc-4952-9e22-428d62ec0349\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119599 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s4v2\" (UniqueName: \"kubernetes.io/projected/5f97371c-7dc2-4170-90e7-f044dcc62f2a-kube-api-access-5s4v2\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119619 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-service-ca\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119641 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0dd2e2-3942-4daa-a45b-17f7bdc66d00-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wb6lm\" (UID: \"2a0dd2e2-3942-4daa-a45b-17f7bdc66d00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119662 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119686 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e658ebdf-74ef-4dab-b48e-53557c516bd3-metrics-tls\") pod \"dns-operator-744455d44c-d46vj\" (UID: \"e658ebdf-74ef-4dab-b48e-53557c516bd3\") " pod="openshift-dns-operator/dns-operator-744455d44c-d46vj" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119705 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-audit-policies\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119728 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119752 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14769a57-f19b-4d49-868f-d1754827714b-serving-cert\") pod \"route-controller-manager-6576b87f9c-q5z6f\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119785 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119813 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-serving-cert\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119843 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-audit\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119872 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-ztnkg\" (UniqueName: \"kubernetes.io/projected/8dc6ffac-58e2-477a-adaf-3eb1de776a9c-kube-api-access-ztnkg\") pod \"machine-approver-56656f9798-9jh6s\" (UID: \"8dc6ffac-58e2-477a-adaf-3eb1de776a9c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119892 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5d00dcee-5512-4730-8743-e128136b9364-metrics-tls\") pod \"ingress-operator-5b745b69d9-qnzvz\" (UID: \"5d00dcee-5512-4730-8743-e128136b9364\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119910 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc6ffac-58e2-477a-adaf-3eb1de776a9c-config\") pod \"machine-approver-56656f9798-9jh6s\" (UID: \"8dc6ffac-58e2-477a-adaf-3eb1de776a9c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119932 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d00dcee-5512-4730-8743-e128136b9364-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qnzvz\" (UID: \"5d00dcee-5512-4730-8743-e128136b9364\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119957 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119976 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-oauth-serving-cert\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.119994 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th4lg\" (UniqueName: \"kubernetes.io/projected/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-kube-api-access-th4lg\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120012 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120031 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-image-import-ca\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120065 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120084 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ba883c9-d8a0-42ec-8894-87769eabf95b-serving-cert\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120111 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-client-ca\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120134 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f97371c-7dc2-4170-90e7-f044dcc62f2a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120152 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-audit-dir\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120173 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f3802bf-e4bc-4952-9e22-428d62ec0349-config\") pod \"machine-api-operator-5694c8668f-gk5q9\" (UID: \"0f3802bf-e4bc-4952-9e22-428d62ec0349\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120204 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mcxs\" (UniqueName: \"kubernetes.io/projected/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-kube-api-access-5mcxs\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120224 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3ba883c9-d8a0-42ec-8894-87769eabf95b-encryption-config\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " 
pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120244 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-njnsk\" (UID: \"ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120277 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120304 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-oauth-config\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120323 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-audit-policies\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120346 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8f91104-da2f-4f2a-90b2-619d9035f8ca-serving-cert\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120363 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120386 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2n57\" (UniqueName: \"kubernetes.io/projected/e8f91104-da2f-4f2a-90b2-619d9035f8ca-kube-api-access-f2n57\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120409 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120430 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-config\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120450 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ba883c9-d8a0-42ec-8894-87769eabf95b-etcd-client\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120473 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-etcd-client\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120493 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-encryption-config\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120513 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8dc6ffac-58e2-477a-adaf-3eb1de776a9c-machine-approver-tls\") pod \"machine-approver-56656f9798-9jh6s\" (UID: \"8dc6ffac-58e2-477a-adaf-3eb1de776a9c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120549 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d00dcee-5512-4730-8743-e128136b9364-trusted-ca\") pod \"ingress-operator-5b745b69d9-qnzvz\" (UID: \"5d00dcee-5512-4730-8743-e128136b9364\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120570 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q252k\" (UniqueName: \"kubernetes.io/projected/3b7f2154-fe8d-4ae4-8009-feb30d797f9b-kube-api-access-q252k\") pod \"console-operator-58897d9998-rgn89\" (UID: \"3b7f2154-fe8d-4ae4-8009-feb30d797f9b\") " pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120591 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d66e251a-5a67-45c4-be63-2f46b56df1a5-audit-dir\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120614 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a0dd2e2-3942-4daa-a45b-17f7bdc66d00-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wb6lm\" (UID: \"2a0dd2e2-3942-4daa-a45b-17f7bdc66d00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120636 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwqhk\" (UniqueName: \"kubernetes.io/projected/5d00dcee-5512-4730-8743-e128136b9364-kube-api-access-xwqhk\") pod \"ingress-operator-5b745b69d9-qnzvz\" (UID: \"5d00dcee-5512-4730-8743-e128136b9364\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120658 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzvrl\" (UniqueName: \"kubernetes.io/projected/2a0dd2e2-3942-4daa-a45b-17f7bdc66d00-kube-api-access-nzvrl\") pod \"openshift-controller-manager-operator-756b6f6bc6-wb6lm\" (UID: \"2a0dd2e2-3942-4daa-a45b-17f7bdc66d00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120676 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgn5z\" (UniqueName: \"kubernetes.io/projected/0f3802bf-e4bc-4952-9e22-428d62ec0349-kube-api-access-sgn5z\") pod \"machine-api-operator-5694c8668f-gk5q9\" (UID: \"0f3802bf-e4bc-4952-9e22-428d62ec0349\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120698 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120719 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ba883c9-d8a0-42ec-8894-87769eabf95b-node-pullsecrets\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120738 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b7f2154-fe8d-4ae4-8009-feb30d797f9b-serving-cert\") pod \"console-operator-58897d9998-rgn89\" (UID: \"3b7f2154-fe8d-4ae4-8009-feb30d797f9b\") " pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120767 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 
07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120795 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120815 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120838 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86g8r\" (UniqueName: \"kubernetes.io/projected/d66e251a-5a67-45c4-be63-2f46b56df1a5-kube-api-access-86g8r\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120861 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfnpk\" (UniqueName: \"kubernetes.io/projected/14769a57-f19b-4d49-868f-d1754827714b-kube-api-access-bfnpk\") pod \"route-controller-manager-6576b87f9c-q5z6f\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120889 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-serving-cert\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120935 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2xls\" (UniqueName: \"kubernetes.io/projected/e658ebdf-74ef-4dab-b48e-53557c516bd3-kube-api-access-z2xls\") pod \"dns-operator-744455d44c-d46vj\" (UID: \"e658ebdf-74ef-4dab-b48e-53557c516bd3\") " pod="openshift-dns-operator/dns-operator-744455d44c-d46vj" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120978 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f97371c-7dc2-4170-90e7-f044dcc62f2a-service-ca-bundle\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.120999 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.122202 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.122302 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-config\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.122507 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.132298 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.133931 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-client-ca\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.148105 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.148318 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.157309 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.157560 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.159332 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.159854 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.160015 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.160599 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.160978 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.161162 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.161376 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.161534 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.161585 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.161940 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.162043 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.162705 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.163245 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-l5mfg"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.163817 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.164349 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sf2rk"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.164725 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.165296 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.166479 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.166978 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.166989 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.174439 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.175070 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.175406 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.175805 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.175937 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.176770 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.177567 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sxf5m"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.177971 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.178137 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.178218 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-sxf5m" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.179130 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.179642 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.180803 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.181014 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.193662 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8f91104-da2f-4f2a-90b2-619d9035f8ca-serving-cert\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.193926 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rgn89"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.194075 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.196634 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.204615 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.205271 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.205957 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.206223 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.218634 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.223569 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vd8ts"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.224829 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4gdvx"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.224999 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.226272 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.227622 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.228592 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229026 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc6ffac-58e2-477a-adaf-3eb1de776a9c-config\") pod \"machine-approver-56656f9798-9jh6s\" (UID: \"8dc6ffac-58e2-477a-adaf-3eb1de776a9c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229066 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d00dcee-5512-4730-8743-e128136b9364-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qnzvz\" (UID: \"5d00dcee-5512-4730-8743-e128136b9364\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229096 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229123 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-oauth-serving-cert\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229141 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th4lg\" (UniqueName: \"kubernetes.io/projected/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-kube-api-access-th4lg\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229163 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229267 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-image-import-ca\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229305 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229327 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ba883c9-d8a0-42ec-8894-87769eabf95b-serving-cert\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229347 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-audit-dir\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229370 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f97371c-7dc2-4170-90e7-f044dcc62f2a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229389 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f3802bf-e4bc-4952-9e22-428d62ec0349-config\") pod \"machine-api-operator-5694c8668f-gk5q9\" (UID: \"0f3802bf-e4bc-4952-9e22-428d62ec0349\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229425 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mcxs\" (UniqueName: \"kubernetes.io/projected/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-kube-api-access-5mcxs\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229447 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3ba883c9-d8a0-42ec-8894-87769eabf95b-encryption-config\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229592 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-njnsk\" (UID: \"ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229644 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229669 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" 
(UniqueName: \"kubernetes.io/secret/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-oauth-config\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229724 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-audit-policies\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229747 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229775 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48xpv\" (UniqueName: \"kubernetes.io/projected/8a123de3-5556-4e34-8433-52805089c13c-kube-api-access-48xpv\") pod \"downloads-7954f5f757-s7jrc\" (UID: \"8a123de3-5556-4e34-8433-52805089c13c\") " pod="openshift-console/downloads-7954f5f757-s7jrc" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229809 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229844 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-config\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229872 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ba883c9-d8a0-42ec-8894-87769eabf95b-etcd-client\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229893 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8dc6ffac-58e2-477a-adaf-3eb1de776a9c-machine-approver-tls\") pod \"machine-approver-56656f9798-9jh6s\" (UID: \"8dc6ffac-58e2-477a-adaf-3eb1de776a9c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.229944 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-etcd-client\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc 
kubenswrapper[4806]: I0126 07:55:52.230063 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-encryption-config\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230088 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d00dcee-5512-4730-8743-e128136b9364-trusted-ca\") pod \"ingress-operator-5b745b69d9-qnzvz\" (UID: \"5d00dcee-5512-4730-8743-e128136b9364\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230107 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q252k\" (UniqueName: \"kubernetes.io/projected/3b7f2154-fe8d-4ae4-8009-feb30d797f9b-kube-api-access-q252k\") pod \"console-operator-58897d9998-rgn89\" (UID: \"3b7f2154-fe8d-4ae4-8009-feb30d797f9b\") " pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230132 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d66e251a-5a67-45c4-be63-2f46b56df1a5-audit-dir\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230158 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a0dd2e2-3942-4daa-a45b-17f7bdc66d00-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wb6lm\" (UID: \"2a0dd2e2-3942-4daa-a45b-17f7bdc66d00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230182 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwqhk\" (UniqueName: \"kubernetes.io/projected/5d00dcee-5512-4730-8743-e128136b9364-kube-api-access-xwqhk\") pod \"ingress-operator-5b745b69d9-qnzvz\" (UID: \"5d00dcee-5512-4730-8743-e128136b9364\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230202 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b7f2154-fe8d-4ae4-8009-feb30d797f9b-serving-cert\") pod \"console-operator-58897d9998-rgn89\" (UID: \"3b7f2154-fe8d-4ae4-8009-feb30d797f9b\") " pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230243 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzvrl\" (UniqueName: \"kubernetes.io/projected/2a0dd2e2-3942-4daa-a45b-17f7bdc66d00-kube-api-access-nzvrl\") pod \"openshift-controller-manager-operator-756b6f6bc6-wb6lm\" (UID: \"2a0dd2e2-3942-4daa-a45b-17f7bdc66d00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230267 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sgn5z\" (UniqueName: \"kubernetes.io/projected/0f3802bf-e4bc-4952-9e22-428d62ec0349-kube-api-access-sgn5z\") pod \"machine-api-operator-5694c8668f-gk5q9\" (UID: \"0f3802bf-e4bc-4952-9e22-428d62ec0349\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230303 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230367 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ba883c9-d8a0-42ec-8894-87769eabf95b-node-pullsecrets\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230424 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230447 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230469 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230490 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86g8r\" (UniqueName: \"kubernetes.io/projected/d66e251a-5a67-45c4-be63-2f46b56df1a5-kube-api-access-86g8r\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230512 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfnpk\" (UniqueName: \"kubernetes.io/projected/14769a57-f19b-4d49-868f-d1754827714b-kube-api-access-bfnpk\") pod \"route-controller-manager-6576b87f9c-q5z6f\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230686 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-serving-cert\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230710 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2xls\" (UniqueName: \"kubernetes.io/projected/e658ebdf-74ef-4dab-b48e-53557c516bd3-kube-api-access-z2xls\") pod \"dns-operator-744455d44c-d46vj\" (UID: \"e658ebdf-74ef-4dab-b48e-53557c516bd3\") " pod="openshift-dns-operator/dns-operator-744455d44c-d46vj" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230748 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f97371c-7dc2-4170-90e7-f044dcc62f2a-service-ca-bundle\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230769 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230796 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b7f2154-fe8d-4ae4-8009-feb30d797f9b-config\") pod \"console-operator-58897d9998-rgn89\" (UID: \"3b7f2154-fe8d-4ae4-8009-feb30d797f9b\") " pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230819 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-etcd-serving-ca\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230846 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ba883c9-d8a0-42ec-8894-87769eabf95b-audit-dir\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230876 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-trusted-ca-bundle\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230896 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8dc6ffac-58e2-477a-adaf-3eb1de776a9c-auth-proxy-config\") pod \"machine-approver-56656f9798-9jh6s\" (UID: \"8dc6ffac-58e2-477a-adaf-3eb1de776a9c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:52 crc 
kubenswrapper[4806]: I0126 07:55:52.230923 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvjcz\" (UniqueName: \"kubernetes.io/projected/3ba883c9-d8a0-42ec-8894-87769eabf95b-kube-api-access-cvjcz\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230941 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bgrq\" (UniqueName: \"kubernetes.io/projected/ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1-kube-api-access-4bgrq\") pod \"cluster-samples-operator-665b6dd947-njnsk\" (UID: \"ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.230960 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f97371c-7dc2-4170-90e7-f044dcc62f2a-serving-cert\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231002 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14769a57-f19b-4d49-868f-d1754827714b-config\") pod \"route-controller-manager-6576b87f9c-q5z6f\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231030 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-config\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231050 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f97371c-7dc2-4170-90e7-f044dcc62f2a-config\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231068 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14769a57-f19b-4d49-868f-d1754827714b-client-ca\") pod \"route-controller-manager-6576b87f9c-q5z6f\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231153 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f3802bf-e4bc-4952-9e22-428d62ec0349-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gk5q9\" (UID: \"0f3802bf-e4bc-4952-9e22-428d62ec0349\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231173 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/3b7f2154-fe8d-4ae4-8009-feb30d797f9b-trusted-ca\") pod \"console-operator-58897d9998-rgn89\" (UID: \"3b7f2154-fe8d-4ae4-8009-feb30d797f9b\") " pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231208 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0f3802bf-e4bc-4952-9e22-428d62ec0349-images\") pod \"machine-api-operator-5694c8668f-gk5q9\" (UID: \"0f3802bf-e4bc-4952-9e22-428d62ec0349\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231245 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231267 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s4v2\" (UniqueName: \"kubernetes.io/projected/5f97371c-7dc2-4170-90e7-f044dcc62f2a-kube-api-access-5s4v2\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231285 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-service-ca\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231312 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0dd2e2-3942-4daa-a45b-17f7bdc66d00-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wb6lm\" (UID: \"2a0dd2e2-3942-4daa-a45b-17f7bdc66d00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231335 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e658ebdf-74ef-4dab-b48e-53557c516bd3-metrics-tls\") pod \"dns-operator-744455d44c-d46vj\" (UID: \"e658ebdf-74ef-4dab-b48e-53557c516bd3\") " pod="openshift-dns-operator/dns-operator-744455d44c-d46vj" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231354 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-audit-policies\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231398 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: 
\"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231423 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231454 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14769a57-f19b-4d49-868f-d1754827714b-serving-cert\") pod \"route-controller-manager-6576b87f9c-q5z6f\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231463 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-image-import-ca\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231477 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-serving-cert\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231499 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-audit\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231560 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztnkg\" (UniqueName: \"kubernetes.io/projected/8dc6ffac-58e2-477a-adaf-3eb1de776a9c-kube-api-access-ztnkg\") pod \"machine-approver-56656f9798-9jh6s\" (UID: \"8dc6ffac-58e2-477a-adaf-3eb1de776a9c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.231579 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5d00dcee-5512-4730-8743-e128136b9364-metrics-tls\") pod \"ingress-operator-5b745b69d9-qnzvz\" (UID: \"5d00dcee-5512-4730-8743-e128136b9364\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.232236 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-oauth-serving-cert\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.233544 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.236760 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.243202 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4r9h2"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.245283 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f97371c-7dc2-4170-90e7-f044dcc62f2a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.245602 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.246144 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hdxh9"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.246242 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.246378 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f3802bf-e4bc-4952-9e22-428d62ec0349-config\") pod \"machine-api-operator-5694c8668f-gk5q9\" (UID: \"0f3802bf-e4bc-4952-9e22-428d62ec0349\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.243251 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-audit-dir\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.246897 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4r9h2" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.246897 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d66e251a-5a67-45c4-be63-2f46b56df1a5-audit-dir\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.247177 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5d00dcee-5512-4730-8743-e128136b9364-trusted-ca\") pod \"ingress-operator-5b745b69d9-qnzvz\" (UID: \"5d00dcee-5512-4730-8743-e128136b9364\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.247878 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-config\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.259936 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ba883c9-d8a0-42ec-8894-87769eabf95b-node-pullsecrets\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.261081 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-trusted-ca-bundle\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.261824 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.264546 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bs67m"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.265395 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.267307 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc6ffac-58e2-477a-adaf-3eb1de776a9c-config\") pod \"machine-approver-56656f9798-9jh6s\" (UID: \"8dc6ffac-58e2-477a-adaf-3eb1de776a9c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.270036 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.274008 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3ba883c9-d8a0-42ec-8894-87769eabf95b-encryption-config\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.274452 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a0dd2e2-3942-4daa-a45b-17f7bdc66d00-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-wb6lm\" (UID: \"2a0dd2e2-3942-4daa-a45b-17f7bdc66d00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.279067 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.279615 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-audit-policies\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.280117 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-njnsk\" (UID: \"ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.280547 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.281007 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ba883c9-d8a0-42ec-8894-87769eabf95b-serving-cert\") pod \"apiserver-76f77b778f-4gdvx\" (UID: 
\"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.281434 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f97371c-7dc2-4170-90e7-f044dcc62f2a-config\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.282246 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f97371c-7dc2-4170-90e7-f044dcc62f2a-service-ca-bundle\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.282286 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.282443 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.282477 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.282592 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3ba883c9-d8a0-42ec-8894-87769eabf95b-audit-dir\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.283098 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b7f2154-fe8d-4ae4-8009-feb30d797f9b-config\") pod \"console-operator-58897d9998-rgn89\" (UID: \"3b7f2154-fe8d-4ae4-8009-feb30d797f9b\") " pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.283180 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8dc6ffac-58e2-477a-adaf-3eb1de776a9c-auth-proxy-config\") pod \"machine-approver-56656f9798-9jh6s\" (UID: \"8dc6ffac-58e2-477a-adaf-3eb1de776a9c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.283401 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-trusted-ca-bundle\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " 
pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.284032 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-etcd-serving-ca\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.284048 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-config\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.284283 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a0dd2e2-3942-4daa-a45b-17f7bdc66d00-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-wb6lm\" (UID: \"2a0dd2e2-3942-4daa-a45b-17f7bdc66d00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.284321 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14769a57-f19b-4d49-868f-d1754827714b-config\") pod \"route-controller-manager-6576b87f9c-q5z6f\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.284647 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14769a57-f19b-4d49-868f-d1754827714b-client-ca\") pod \"route-controller-manager-6576b87f9c-q5z6f\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.285071 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-serving-cert\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.285400 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.285439 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0f3802bf-e4bc-4952-9e22-428d62ec0349-images\") pod \"machine-api-operator-5694c8668f-gk5q9\" (UID: \"0f3802bf-e4bc-4952-9e22-428d62ec0349\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.286102 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-service-ca\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.286628 
4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-audit-policies\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.286852 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3b7f2154-fe8d-4ae4-8009-feb30d797f9b-trusted-ca\") pod \"console-operator-58897d9998-rgn89\" (UID: \"3b7f2154-fe8d-4ae4-8009-feb30d797f9b\") " pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.287760 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.288191 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f3802bf-e4bc-4952-9e22-428d62ec0349-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-gk5q9\" (UID: \"0f3802bf-e4bc-4952-9e22-428d62ec0349\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.288359 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3ba883c9-d8a0-42ec-8894-87769eabf95b-audit\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.288474 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.288692 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8dc6ffac-58e2-477a-adaf-3eb1de776a9c-machine-approver-tls\") pod \"machine-approver-56656f9798-9jh6s\" (UID: \"8dc6ffac-58e2-477a-adaf-3eb1de776a9c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.288845 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.289281 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-etcd-client\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: 
\"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.289559 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.289736 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b7f2154-fe8d-4ae4-8009-feb30d797f9b-serving-cert\") pod \"console-operator-58897d9998-rgn89\" (UID: \"3b7f2154-fe8d-4ae4-8009-feb30d797f9b\") " pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.289943 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5d00dcee-5512-4730-8743-e128136b9364-metrics-tls\") pod \"ingress-operator-5b745b69d9-qnzvz\" (UID: \"5d00dcee-5512-4730-8743-e128136b9364\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.290101 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3ba883c9-d8a0-42ec-8894-87769eabf95b-etcd-client\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.290145 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.290692 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-encryption-config\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.290852 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.291211 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f97371c-7dc2-4170-90e7-f044dcc62f2a-serving-cert\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.291258 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qd6mh"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.292202 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.292877 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e658ebdf-74ef-4dab-b48e-53557c516bd3-metrics-tls\") pod \"dns-operator-744455d44c-d46vj\" (UID: \"e658ebdf-74ef-4dab-b48e-53557c516bd3\") " pod="openshift-dns-operator/dns-operator-744455d44c-d46vj" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.292986 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.293135 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.293716 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14769a57-f19b-4d49-868f-d1754827714b-serving-cert\") pod \"route-controller-manager-6576b87f9c-q5z6f\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.295110 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-nkrgc"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.295676 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nkrgc" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.296488 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.297358 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-oauth-config\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.301577 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-jrg5t"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.305269 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.306612 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wkkp2"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.306726 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jrg5t" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.307283 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-htpwn"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.307302 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.307373 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wkkp2" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.307769 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-serving-cert\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.308276 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d46vj"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.309380 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.310352 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.311716 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-s7jrc"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.313093 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.314117 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.315159 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tncnb"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.316452 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.317610 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.320015 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sf2rk"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.320986 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.322157 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.323390 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jrg5t"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.324692 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.325702 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.326344 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.327547 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bs67m"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.329388 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.330501 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.331604 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.332309 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.332612 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48xpv\" (UniqueName: \"kubernetes.io/projected/8a123de3-5556-4e34-8433-52805089c13c-kube-api-access-48xpv\") pod \"downloads-7954f5f757-s7jrc\" (UID: \"8a123de3-5556-4e34-8433-52805089c13c\") " pod="openshift-console/downloads-7954f5f757-s7jrc" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.333641 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.335030 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vd8ts"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.336033 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mls86"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.337197 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.338124 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nkrgc"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.338368 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.338992 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4r9h2"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.340335 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sxf5m"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.341414 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mls86"] Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.346011 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.368343 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.386070 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.406449 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.425428 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.446597 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.485567 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.505507 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.525223 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.545814 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.566270 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.586644 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.605791 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.641890 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2n57\" (UniqueName: \"kubernetes.io/projected/e8f91104-da2f-4f2a-90b2-619d9035f8ca-kube-api-access-f2n57\") pod \"controller-manager-879f6c89f-7pdxm\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.646275 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.667687 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.687095 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.707086 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.726223 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.746107 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.765750 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.786194 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.805874 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.826513 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.846264 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.866763 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.887048 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.906275 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.921151 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.926668 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.946767 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.965755 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 07:55:52 crc kubenswrapper[4806]: I0126 07:55:52.986121 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.014029 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.026310 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.047766 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.066761 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.086820 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.106456 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.126345 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.133056 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7pdxm"] Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.147535 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 07:55:53 crc kubenswrapper[4806]: W0126 07:55:53.152657 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8f91104_da2f_4f2a_90b2_619d9035f8ca.slice/crio-688372280e8ba1fba515cf38eb3112e991aa22340a2b6f65e4591b184508c17f WatchSource:0}: Error finding container 688372280e8ba1fba515cf38eb3112e991aa22340a2b6f65e4591b184508c17f: Status 404 returned error can't find the container with id 688372280e8ba1fba515cf38eb3112e991aa22340a2b6f65e4591b184508c17f Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.166275 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.184753 4806 request.go:700] Waited for 1.008617228s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.186225 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.206437 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.225677 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.247457 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.266152 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.285716 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.306161 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.325907 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.345875 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.386256 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.405599 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.426021 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.446082 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.465292 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.485947 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.506325 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.525250 4806 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.546675 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.575472 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.586029 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.606547 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.625517 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.679775 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th4lg\" (UniqueName: \"kubernetes.io/projected/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-kube-api-access-th4lg\") pod \"console-f9d7485db-qd6mh\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.687431 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.693446 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgn5z\" (UniqueName: \"kubernetes.io/projected/0f3802bf-e4bc-4952-9e22-428d62ec0349-kube-api-access-sgn5z\") pod \"machine-api-operator-5694c8668f-gk5q9\" (UID: \"0f3802bf-e4bc-4952-9e22-428d62ec0349\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.720199 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.732098 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mcxs\" (UniqueName: \"kubernetes.io/projected/cf7f962d-5924-4e0e-bd23-cd46ba65f5a9-kube-api-access-5mcxs\") pod \"apiserver-7bbb656c7d-gmnqv\" (UID: \"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.742590 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q252k\" (UniqueName: \"kubernetes.io/projected/3b7f2154-fe8d-4ae4-8009-feb30d797f9b-kube-api-access-q252k\") pod \"console-operator-58897d9998-rgn89\" (UID: \"3b7f2154-fe8d-4ae4-8009-feb30d797f9b\") " pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.746689 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.766436 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.786514 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.790011 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" event={"ID":"e8f91104-da2f-4f2a-90b2-619d9035f8ca","Type":"ContainerStarted","Data":"4052e8d2c06743e3a043c779cd56f2fd0434b7271ab2ab71f9b50227bd05e3ba"} Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.790070 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" event={"ID":"e8f91104-da2f-4f2a-90b2-619d9035f8ca","Type":"ContainerStarted","Data":"688372280e8ba1fba515cf38eb3112e991aa22340a2b6f65e4591b184508c17f"} Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.790786 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.797612 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.808953 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.844875 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwqhk\" (UniqueName: \"kubernetes.io/projected/5d00dcee-5512-4730-8743-e128136b9364-kube-api-access-xwqhk\") pod \"ingress-operator-5b745b69d9-qnzvz\" (UID: \"5d00dcee-5512-4730-8743-e128136b9364\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.865170 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.870652 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nzvrl\" (UniqueName: \"kubernetes.io/projected/2a0dd2e2-3942-4daa-a45b-17f7bdc66d00-kube-api-access-nzvrl\") pod \"openshift-controller-manager-operator-756b6f6bc6-wb6lm\" (UID: \"2a0dd2e2-3942-4daa-a45b-17f7bdc66d00\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.879725 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.886791 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.893871 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.907770 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.926270 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.948445 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.985218 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5d00dcee-5512-4730-8743-e128136b9364-bound-sa-token\") pod \"ingress-operator-5b745b69d9-qnzvz\" (UID: \"5d00dcee-5512-4730-8743-e128136b9364\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:53 crc kubenswrapper[4806]: I0126 07:55:53.986720 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.004506 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86g8r\" (UniqueName: \"kubernetes.io/projected/d66e251a-5a67-45c4-be63-2f46b56df1a5-kube-api-access-86g8r\") pod \"oauth-openshift-558db77b4-hdxh9\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.021961 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-qd6mh"] Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.028321 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfnpk\" (UniqueName: \"kubernetes.io/projected/14769a57-f19b-4d49-868f-d1754827714b-kube-api-access-bfnpk\") pod \"route-controller-manager-6576b87f9c-q5z6f\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.045149 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2xls\" (UniqueName: \"kubernetes.io/projected/e658ebdf-74ef-4dab-b48e-53557c516bd3-kube-api-access-z2xls\") pod \"dns-operator-744455d44c-d46vj\" (UID: \"e658ebdf-74ef-4dab-b48e-53557c516bd3\") " pod="openshift-dns-operator/dns-operator-744455d44c-d46vj" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.069891 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvjcz\" (UniqueName: \"kubernetes.io/projected/3ba883c9-d8a0-42ec-8894-87769eabf95b-kube-api-access-cvjcz\") pod \"apiserver-76f77b778f-4gdvx\" (UID: \"3ba883c9-d8a0-42ec-8894-87769eabf95b\") " pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.094140 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d46vj" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.099701 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bgrq\" (UniqueName: \"kubernetes.io/projected/ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1-kube-api-access-4bgrq\") pod \"cluster-samples-operator-665b6dd947-njnsk\" (UID: \"ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.112844 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s4v2\" (UniqueName: \"kubernetes.io/projected/5f97371c-7dc2-4170-90e7-f044dcc62f2a-kube-api-access-5s4v2\") pod \"authentication-operator-69f744f599-htpwn\" (UID: \"5f97371c-7dc2-4170-90e7-f044dcc62f2a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.122950 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.123086 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.129846 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.142566 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztnkg\" (UniqueName: \"kubernetes.io/projected/8dc6ffac-58e2-477a-adaf-3eb1de776a9c-kube-api-access-ztnkg\") pod \"machine-approver-56656f9798-9jh6s\" (UID: \"8dc6ffac-58e2-477a-adaf-3eb1de776a9c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.145298 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.146451 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.153232 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.182810 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.188589 4806 request.go:700] Waited for 1.897484372s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.191914 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.210839 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.216465 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.227161 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.230126 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv"] Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.252917 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 07:55:54 crc kubenswrapper[4806]: W0126 07:55:54.255210 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf7f962d_5924_4e0e_bd23_cd46ba65f5a9.slice/crio-5aac34ac24abf97ba0173038040d4a0790414cdbfaf10cef46f80cc6fad20d09 WatchSource:0}: Error finding container 5aac34ac24abf97ba0173038040d4a0790414cdbfaf10cef46f80cc6fad20d09: Status 404 returned error can't find the container with id 5aac34ac24abf97ba0173038040d4a0790414cdbfaf10cef46f80cc6fad20d09 Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.263895 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.278105 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.297978 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-gk5q9"] Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.298235 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.313612 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.314094 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.327514 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.356927 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.367038 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.389004 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.421266 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.441790 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.453836 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.463564 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48xpv\" (UniqueName: \"kubernetes.io/projected/8a123de3-5556-4e34-8433-52805089c13c-kube-api-access-48xpv\") pod \"downloads-7954f5f757-s7jrc\" (UID: \"8a123de3-5556-4e34-8433-52805089c13c\") " pod="openshift-console/downloads-7954f5f757-s7jrc" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.477479 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-s7jrc" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.478058 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.489797 4806 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 07:55:54 crc kubenswrapper[4806]: W0126 07:55:54.536757 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8dc6ffac_58e2_477a_adaf_3eb1de776a9c.slice/crio-5076b8ab39881404b02c3132123c36231dddff18ec51501de38f27121cf66f16 WatchSource:0}: Error finding container 5076b8ab39881404b02c3132123c36231dddff18ec51501de38f27121cf66f16: Status 404 returned error can't find the container with id 5076b8ab39881404b02c3132123c36231dddff18ec51501de38f27121cf66f16 Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569231 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/19ceab6e-3284-4ff6-b3a7-541d73c25150-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5g5mr\" (UID: \"19ceab6e-3284-4ff6-b3a7-541d73c25150\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569334 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l87wh\" (UniqueName: \"kubernetes.io/projected/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-kube-api-access-l87wh\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569390 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bcae027-4e25-4c41-bbc9-639927f58691-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569444 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n4p7\" (UniqueName: \"kubernetes.io/projected/19ceab6e-3284-4ff6-b3a7-541d73c25150-kube-api-access-4n4p7\") pod \"olm-operator-6b444d44fb-5g5mr\" (UID: \"19ceab6e-3284-4ff6-b3a7-541d73c25150\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569474 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2df8b\" (UniqueName: \"kubernetes.io/projected/aa802450-996f-4548-a763-1e08d1cc564a-kube-api-access-2df8b\") pod \"catalog-operator-68c6474976-hq8jp\" (UID: \"aa802450-996f-4548-a763-1e08d1cc564a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569507 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-registry-tls\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569548 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/19ceab6e-3284-4ff6-b3a7-541d73c25150-srv-cert\") pod \"olm-operator-6b444d44fb-5g5mr\" (UID: \"19ceab6e-3284-4ff6-b3a7-541d73c25150\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569573 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-bound-sa-token\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569596 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f42b0469-833b-4dca-bc17-71e62b73f378-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mkvhj\" (UID: \"f42b0469-833b-4dca-bc17-71e62b73f378\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569655 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bacec42-b4b2-4638-9ea2-8db24615e6db-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6sjl8\" (UID: \"3bacec42-b4b2-4638-9ea2-8db24615e6db\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569718 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-service-ca-bundle\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569742 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2a762473-781e-436f-bc99-584a5301abc3-proxy-tls\") pod \"machine-config-operator-74547568cd-vhrp6\" (UID: \"2a762473-781e-436f-bc99-584a5301abc3\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569764 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/aa802450-996f-4548-a763-1e08d1cc564a-srv-cert\") pod \"catalog-operator-68c6474976-hq8jp\" (UID: \"aa802450-996f-4548-a763-1e08d1cc564a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569788 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/58108b0d-1028-4924-b025-1c11d3238dc1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-46429\" (UID: \"58108b0d-1028-4924-b025-1c11d3238dc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569812 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lptmw\" (UniqueName: \"kubernetes.io/projected/58108b0d-1028-4924-b025-1c11d3238dc1-kube-api-access-lptmw\") pod \"cluster-image-registry-operator-dc59b4c8b-46429\" (UID: \"58108b0d-1028-4924-b025-1c11d3238dc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569838 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aff32d6f-604f-49f1-8547-bea4a259ed45-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zrncs\" (UID: \"aff32d6f-604f-49f1-8547-bea4a259ed45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569894 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569921 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-etcd-service-ca\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.569976 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/aa802450-996f-4548-a763-1e08d1cc564a-profile-collector-cert\") pod \"catalog-operator-68c6474976-hq8jp\" (UID: \"aa802450-996f-4548-a763-1e08d1cc564a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570002 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5cd7\" (UniqueName: \"kubernetes.io/projected/b624b9bd-2dce-41fd-8abf-f21908db8f6c-kube-api-access-k5cd7\") pod 
\"openshift-config-operator-7777fb866f-sbd2c\" (UID: \"b624b9bd-2dce-41fd-8abf-f21908db8f6c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570031 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tgx9\" (UniqueName: \"kubernetes.io/projected/d6260cd7-9202-46cc-b943-b60aaa0e07ff-kube-api-access-8tgx9\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqmhh\" (UID: \"d6260cd7-9202-46cc-b943-b60aaa0e07ff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570062 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bcae027-4e25-4c41-bbc9-639927f58691-trusted-ca\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570079 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-default-certificate\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570120 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m79qc\" (UniqueName: \"kubernetes.io/projected/bd8d4294-4075-456c-ab53-d3646b5117b5-kube-api-access-m79qc\") pod \"control-plane-machine-set-operator-78cbb6b69f-jdcjm\" (UID: \"bd8d4294-4075-456c-ab53-d3646b5117b5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570142 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-serving-cert\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570196 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a762473-781e-436f-bc99-584a5301abc3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vhrp6\" (UID: \"2a762473-781e-436f-bc99-584a5301abc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570221 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bcae027-4e25-4c41-bbc9-639927f58691-registry-certificates\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570301 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/aff32d6f-604f-49f1-8547-bea4a259ed45-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zrncs\" (UID: \"aff32d6f-604f-49f1-8547-bea4a259ed45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570339 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bcae027-4e25-4c41-bbc9-639927f58691-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570367 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f42b0469-833b-4dca-bc17-71e62b73f378-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mkvhj\" (UID: \"f42b0469-833b-4dca-bc17-71e62b73f378\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570391 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b624b9bd-2dce-41fd-8abf-f21908db8f6c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sbd2c\" (UID: \"b624b9bd-2dce-41fd-8abf-f21908db8f6c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570500 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bacec42-b4b2-4638-9ea2-8db24615e6db-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6sjl8\" (UID: \"3bacec42-b4b2-4638-9ea2-8db24615e6db\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570580 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-config\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570665 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-etcd-client\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570694 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxvqk\" (UniqueName: \"kubernetes.io/projected/2a762473-781e-436f-bc99-584a5301abc3-kube-api-access-kxvqk\") pod \"machine-config-operator-74547568cd-vhrp6\" (UID: \"2a762473-781e-436f-bc99-584a5301abc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570714 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-etcd-ca\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570781 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76e1842c-1bb9-492c-9494-55a872376b54-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd7nh\" (UID: \"76e1842c-1bb9-492c-9494-55a872376b54\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570804 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6260cd7-9202-46cc-b943-b60aaa0e07ff-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqmhh\" (UID: \"d6260cd7-9202-46cc-b943-b60aaa0e07ff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570843 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shc8r\" (UniqueName: \"kubernetes.io/projected/aff32d6f-604f-49f1-8547-bea4a259ed45-kube-api-access-shc8r\") pod \"openshift-apiserver-operator-796bbdcf4f-zrncs\" (UID: \"aff32d6f-604f-49f1-8547-bea4a259ed45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570948 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk4pk\" (UniqueName: \"kubernetes.io/projected/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-kube-api-access-kk4pk\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.570978 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fca40e01-6a4e-46e1-970d-60b8436aa04e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sxf5m\" (UID: \"fca40e01-6a4e-46e1-970d-60b8436aa04e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sxf5m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571012 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-459vt\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-kube-api-access-459vt\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571111 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2a762473-781e-436f-bc99-584a5301abc3-images\") pod \"machine-config-operator-74547568cd-vhrp6\" (UID: \"2a762473-781e-436f-bc99-584a5301abc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571153 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-stats-auth\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571191 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-metrics-certs\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571230 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e1842c-1bb9-492c-9494-55a872376b54-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd7nh\" (UID: \"76e1842c-1bb9-492c-9494-55a872376b54\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571250 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58108b0d-1028-4924-b025-1c11d3238dc1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-46429\" (UID: \"58108b0d-1028-4924-b025-1c11d3238dc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571286 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6260cd7-9202-46cc-b943-b60aaa0e07ff-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqmhh\" (UID: \"d6260cd7-9202-46cc-b943-b60aaa0e07ff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571321 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/58108b0d-1028-4924-b025-1c11d3238dc1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-46429\" (UID: \"58108b0d-1028-4924-b025-1c11d3238dc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571352 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bacec42-b4b2-4638-9ea2-8db24615e6db-config\") pod \"kube-controller-manager-operator-78b949d7b-6sjl8\" (UID: \"3bacec42-b4b2-4638-9ea2-8db24615e6db\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571372 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42b0469-833b-4dca-bc17-71e62b73f378-config\") pod \"kube-apiserver-operator-766d6c64bb-mkvhj\" (UID: \"f42b0469-833b-4dca-bc17-71e62b73f378\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" Jan 26 07:55:54 crc 
kubenswrapper[4806]: I0126 07:55:54.571401 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76e1842c-1bb9-492c-9494-55a872376b54-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd7nh\" (UID: \"76e1842c-1bb9-492c-9494-55a872376b54\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571420 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrqf4\" (UniqueName: \"kubernetes.io/projected/fca40e01-6a4e-46e1-970d-60b8436aa04e-kube-api-access-jrqf4\") pod \"multus-admission-controller-857f4d67dd-sxf5m\" (UID: \"fca40e01-6a4e-46e1-970d-60b8436aa04e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sxf5m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571443 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b624b9bd-2dce-41fd-8abf-f21908db8f6c-serving-cert\") pod \"openshift-config-operator-7777fb866f-sbd2c\" (UID: \"b624b9bd-2dce-41fd-8abf-f21908db8f6c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.571578 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8d4294-4075-456c-ab53-d3646b5117b5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jdcjm\" (UID: \"bd8d4294-4075-456c-ab53-d3646b5117b5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm" Jan 26 07:55:54 crc kubenswrapper[4806]: E0126 07:55:54.572489 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:55.072474101 +0000 UTC m=+134.336882157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.674730 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.675401 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bacec42-b4b2-4638-9ea2-8db24615e6db-config\") pod \"kube-controller-manager-operator-78b949d7b-6sjl8\" (UID: \"3bacec42-b4b2-4638-9ea2-8db24615e6db\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.675426 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42b0469-833b-4dca-bc17-71e62b73f378-config\") pod \"kube-apiserver-operator-766d6c64bb-mkvhj\" (UID: \"f42b0469-833b-4dca-bc17-71e62b73f378\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.675447 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/001f7476-4d33-474a-80c9-8e99cb19b4e5-signing-key\") pod \"service-ca-9c57cc56f-bs67m\" (UID: \"001f7476-4d33-474a-80c9-8e99cb19b4e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.675466 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-plugins-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.675485 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76e1842c-1bb9-492c-9494-55a872376b54-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd7nh\" (UID: \"76e1842c-1bb9-492c-9494-55a872376b54\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.675504 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrqf4\" (UniqueName: \"kubernetes.io/projected/fca40e01-6a4e-46e1-970d-60b8436aa04e-kube-api-access-jrqf4\") pod \"multus-admission-controller-857f4d67dd-sxf5m\" (UID: \"fca40e01-6a4e-46e1-970d-60b8436aa04e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sxf5m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.675540 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b624b9bd-2dce-41fd-8abf-f21908db8f6c-serving-cert\") pod \"openshift-config-operator-7777fb866f-sbd2c\" (UID: \"b624b9bd-2dce-41fd-8abf-f21908db8f6c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.675563 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8d4294-4075-456c-ab53-d3646b5117b5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jdcjm\" (UID: \"bd8d4294-4075-456c-ab53-d3646b5117b5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.675582 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/19ceab6e-3284-4ff6-b3a7-541d73c25150-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5g5mr\" (UID: \"19ceab6e-3284-4ff6-b3a7-541d73c25150\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.675599 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tfrg\" (UniqueName: \"kubernetes.io/projected/380d2e61-5b11-446b-b3b9-d3e7fa65569e-kube-api-access-7tfrg\") pod \"migrator-59844c95c7-4r9h2\" (UID: \"380d2e61-5b11-446b-b3b9-d3e7fa65569e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4r9h2" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.675618 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h82dx\" (UniqueName: \"kubernetes.io/projected/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-kube-api-access-h82dx\") pod \"collect-profiles-29490225-v2v7z\" (UID: \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:55:54 crc kubenswrapper[4806]: E0126 07:55:54.676265 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:55.176225478 +0000 UTC m=+134.440633534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.691758 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l87wh\" (UniqueName: \"kubernetes.io/projected/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-kube-api-access-l87wh\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.691797 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bcae027-4e25-4c41-bbc9-639927f58691-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.691838 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdf983d-19e2-445c-9ae4-237ec10dd839-config\") pod \"service-ca-operator-777779d784-7c9hs\" (UID: \"4cdf983d-19e2-445c-9ae4-237ec10dd839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.691865 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n4p7\" (UniqueName: \"kubernetes.io/projected/19ceab6e-3284-4ff6-b3a7-541d73c25150-kube-api-access-4n4p7\") pod \"olm-operator-6b444d44fb-5g5mr\" (UID: \"19ceab6e-3284-4ff6-b3a7-541d73c25150\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.691884 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2df8b\" (UniqueName: \"kubernetes.io/projected/aa802450-996f-4548-a763-1e08d1cc564a-kube-api-access-2df8b\") pod \"catalog-operator-68c6474976-hq8jp\" (UID: \"aa802450-996f-4548-a763-1e08d1cc564a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.691912 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5192ad78-580b-4126-9241-1d52339308b1-proxy-tls\") pod \"machine-config-controller-84d6567774-mkfwt\" (UID: \"5192ad78-580b-4126-9241-1d52339308b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.691935 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-registry-tls\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.691954 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/19ceab6e-3284-4ff6-b3a7-541d73c25150-srv-cert\") pod \"olm-operator-6b444d44fb-5g5mr\" (UID: \"19ceab6e-3284-4ff6-b3a7-541d73c25150\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.691973 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-bound-sa-token\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.691990 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-registration-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692007 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-csi-data-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692030 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f42b0469-833b-4dca-bc17-71e62b73f378-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mkvhj\" (UID: \"f42b0469-833b-4dca-bc17-71e62b73f378\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692048 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5cfe7537-639f-4bbc-9e05-bca737f295ce-tmpfs\") pod \"packageserver-d55dfcdfc-f9fk8\" (UID: \"5cfe7537-639f-4bbc-9e05-bca737f295ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692064 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vpsv\" (UniqueName: \"kubernetes.io/projected/5cfe7537-639f-4bbc-9e05-bca737f295ce-kube-api-access-6vpsv\") pod \"packageserver-d55dfcdfc-f9fk8\" (UID: \"5cfe7537-639f-4bbc-9e05-bca737f295ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692083 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84d147bb-634e-40fb-a631-91ff228c0801-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vd8ts\" (UID: \"84d147bb-634e-40fb-a631-91ff228c0801\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692106 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bacec42-b4b2-4638-9ea2-8db24615e6db-serving-cert\") 
pod \"kube-controller-manager-operator-78b949d7b-6sjl8\" (UID: \"3bacec42-b4b2-4638-9ea2-8db24615e6db\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692124 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r24dj\" (UniqueName: \"kubernetes.io/projected/5192ad78-580b-4126-9241-1d52339308b1-kube-api-access-r24dj\") pod \"machine-config-controller-84d6567774-mkfwt\" (UID: \"5192ad78-580b-4126-9241-1d52339308b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692141 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/001f7476-4d33-474a-80c9-8e99cb19b4e5-signing-cabundle\") pod \"service-ca-9c57cc56f-bs67m\" (UID: \"001f7476-4d33-474a-80c9-8e99cb19b4e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692191 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-service-ca-bundle\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692211 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2a762473-781e-436f-bc99-584a5301abc3-proxy-tls\") pod \"machine-config-operator-74547568cd-vhrp6\" (UID: \"2a762473-781e-436f-bc99-584a5301abc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692233 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/aa802450-996f-4548-a763-1e08d1cc564a-srv-cert\") pod \"catalog-operator-68c6474976-hq8jp\" (UID: \"aa802450-996f-4548-a763-1e08d1cc564a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692249 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/58108b0d-1028-4924-b025-1c11d3238dc1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-46429\" (UID: \"58108b0d-1028-4924-b025-1c11d3238dc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692267 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lptmw\" (UniqueName: \"kubernetes.io/projected/58108b0d-1028-4924-b025-1c11d3238dc1-kube-api-access-lptmw\") pod \"cluster-image-registry-operator-dc59b4c8b-46429\" (UID: \"58108b0d-1028-4924-b025-1c11d3238dc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692287 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aff32d6f-604f-49f1-8547-bea4a259ed45-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zrncs\" (UID: 
\"aff32d6f-604f-49f1-8547-bea4a259ed45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692317 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.692334 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-etcd-service-ca\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.681267 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3bacec42-b4b2-4638-9ea2-8db24615e6db-config\") pod \"kube-controller-manager-operator-78b949d7b-6sjl8\" (UID: \"3bacec42-b4b2-4638-9ea2-8db24615e6db\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.679962 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/19ceab6e-3284-4ff6-b3a7-541d73c25150-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5g5mr\" (UID: \"19ceab6e-3284-4ff6-b3a7-541d73c25150\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.680454 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42b0469-833b-4dca-bc17-71e62b73f378-config\") pod \"kube-apiserver-operator-766d6c64bb-mkvhj\" (UID: \"f42b0469-833b-4dca-bc17-71e62b73f378\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.693742 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b624b9bd-2dce-41fd-8abf-f21908db8f6c-serving-cert\") pod \"openshift-config-operator-7777fb866f-sbd2c\" (UID: \"b624b9bd-2dce-41fd-8abf-f21908db8f6c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694045 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/41318e1c-f829-49d4-9ad8-5dc639973784-cert\") pod \"ingress-canary-nkrgc\" (UID: \"41318e1c-f829-49d4-9ad8-5dc639973784\") " pod="openshift-ingress-canary/ingress-canary-nkrgc" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694108 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/aa802450-996f-4548-a763-1e08d1cc564a-profile-collector-cert\") pod \"catalog-operator-68c6474976-hq8jp\" (UID: \"aa802450-996f-4548-a763-1e08d1cc564a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 
07:55:54.694136 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5cd7\" (UniqueName: \"kubernetes.io/projected/b624b9bd-2dce-41fd-8abf-f21908db8f6c-kube-api-access-k5cd7\") pod \"openshift-config-operator-7777fb866f-sbd2c\" (UID: \"b624b9bd-2dce-41fd-8abf-f21908db8f6c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694159 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tgx9\" (UniqueName: \"kubernetes.io/projected/d6260cd7-9202-46cc-b943-b60aaa0e07ff-kube-api-access-8tgx9\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqmhh\" (UID: \"d6260cd7-9202-46cc-b943-b60aaa0e07ff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694182 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bcae027-4e25-4c41-bbc9-639927f58691-trusted-ca\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694204 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-default-certificate\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694228 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m79qc\" (UniqueName: \"kubernetes.io/projected/bd8d4294-4075-456c-ab53-d3646b5117b5-kube-api-access-m79qc\") pod \"control-plane-machine-set-operator-78cbb6b69f-jdcjm\" (UID: \"bd8d4294-4075-456c-ab53-d3646b5117b5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694345 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-serving-cert\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694368 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4lrg\" (UniqueName: \"kubernetes.io/projected/41318e1c-f829-49d4-9ad8-5dc639973784-kube-api-access-l4lrg\") pod \"ingress-canary-nkrgc\" (UID: \"41318e1c-f829-49d4-9ad8-5dc639973784\") " pod="openshift-ingress-canary/ingress-canary-nkrgc" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694390 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a762473-781e-436f-bc99-584a5301abc3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vhrp6\" (UID: \"2a762473-781e-436f-bc99-584a5301abc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694411 4806 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bcae027-4e25-4c41-bbc9-639927f58691-registry-certificates\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694436 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-socket-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694457 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5cfe7537-639f-4bbc-9e05-bca737f295ce-apiservice-cert\") pod \"packageserver-d55dfcdfc-f9fk8\" (UID: \"5cfe7537-639f-4bbc-9e05-bca737f295ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694474 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5cfe7537-639f-4bbc-9e05-bca737f295ce-webhook-cert\") pod \"packageserver-d55dfcdfc-f9fk8\" (UID: \"5cfe7537-639f-4bbc-9e05-bca737f295ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694505 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5192ad78-580b-4126-9241-1d52339308b1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mkfwt\" (UID: \"5192ad78-580b-4126-9241-1d52339308b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694543 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aff32d6f-604f-49f1-8547-bea4a259ed45-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zrncs\" (UID: \"aff32d6f-604f-49f1-8547-bea4a259ed45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694565 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bcae027-4e25-4c41-bbc9-639927f58691-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694584 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f42b0469-833b-4dca-bc17-71e62b73f378-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mkvhj\" (UID: \"f42b0469-833b-4dca-bc17-71e62b73f378\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.694601 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/b624b9bd-2dce-41fd-8abf-f21908db8f6c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sbd2c\" (UID: \"b624b9bd-2dce-41fd-8abf-f21908db8f6c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.695198 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bcae027-4e25-4c41-bbc9-639927f58691-ca-trust-extracted\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.707881 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-etcd-service-ca\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.708635 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aff32d6f-604f-49f1-8547-bea4a259ed45-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zrncs\" (UID: \"aff32d6f-604f-49f1-8547-bea4a259ed45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.713623 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-service-ca-bundle\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.718316 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bcae027-4e25-4c41-bbc9-639927f58691-trusted-ca\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.718794 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd8d4294-4075-456c-ab53-d3646b5117b5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jdcjm\" (UID: \"bd8d4294-4075-456c-ab53-d3646b5117b5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.729446 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f42b0469-833b-4dca-bc17-71e62b73f378-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mkvhj\" (UID: \"f42b0469-833b-4dca-bc17-71e62b73f378\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.741338 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/aa802450-996f-4548-a763-1e08d1cc564a-profile-collector-cert\") pod \"catalog-operator-68c6474976-hq8jp\" (UID: \"aa802450-996f-4548-a763-1e08d1cc564a\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:55:54 crc kubenswrapper[4806]: E0126 07:55:54.742049 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:55.242030944 +0000 UTC m=+134.506439000 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.742982 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2a762473-781e-436f-bc99-584a5301abc3-proxy-tls\") pod \"machine-config-operator-74547568cd-vhrp6\" (UID: \"2a762473-781e-436f-bc99-584a5301abc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.743455 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b624b9bd-2dce-41fd-8abf-f21908db8f6c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-sbd2c\" (UID: \"b624b9bd-2dce-41fd-8abf-f21908db8f6c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.743667 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9phg\" (UniqueName: \"kubernetes.io/projected/84d147bb-634e-40fb-a631-91ff228c0801-kube-api-access-n9phg\") pod \"marketplace-operator-79b997595-vd8ts\" (UID: \"84d147bb-634e-40fb-a631-91ff228c0801\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.743729 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bacec42-b4b2-4638-9ea2-8db24615e6db-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6sjl8\" (UID: \"3bacec42-b4b2-4638-9ea2-8db24615e6db\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.743752 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61b3db9d-f649-4424-8cc5-b7b3f3f1161b-config-volume\") pod \"dns-default-jrg5t\" (UID: \"61b3db9d-f649-4424-8cc5-b7b3f3f1161b\") " pod="openshift-dns/dns-default-jrg5t" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.743818 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-config\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.743881 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2znn\" (UniqueName: \"kubernetes.io/projected/61b3db9d-f649-4424-8cc5-b7b3f3f1161b-kube-api-access-p2znn\") pod \"dns-default-jrg5t\" (UID: \"61b3db9d-f649-4424-8cc5-b7b3f3f1161b\") " pod="openshift-dns/dns-default-jrg5t" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.744492 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-config\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.744573 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckv4w\" (UniqueName: \"kubernetes.io/projected/ac2990c0-273e-4614-821b-a1db45c09c0f-kube-api-access-ckv4w\") pod \"package-server-manager-789f6589d5-ncld5\" (UID: \"ac2990c0-273e-4614-821b-a1db45c09c0f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.744639 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pfpr\" (UniqueName: \"kubernetes.io/projected/e04696fa-1756-4ad4-9908-a09f7339c584-kube-api-access-8pfpr\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.744966 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac2990c0-273e-4614-821b-a1db45c09c0f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ncld5\" (UID: \"ac2990c0-273e-4614-821b-a1db45c09c0f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.745033 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtswp\" (UniqueName: \"kubernetes.io/projected/001f7476-4d33-474a-80c9-8e99cb19b4e5-kube-api-access-mtswp\") pod \"service-ca-9c57cc56f-bs67m\" (UID: \"001f7476-4d33-474a-80c9-8e99cb19b4e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.745065 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-etcd-client\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.745083 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61b3db9d-f649-4424-8cc5-b7b3f3f1161b-metrics-tls\") pod \"dns-default-jrg5t\" (UID: \"61b3db9d-f649-4424-8cc5-b7b3f3f1161b\") " pod="openshift-dns/dns-default-jrg5t" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.745146 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4cdf983d-19e2-445c-9ae4-237ec10dd839-serving-cert\") pod \"service-ca-operator-777779d784-7c9hs\" (UID: \"4cdf983d-19e2-445c-9ae4-237ec10dd839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.745205 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxvqk\" (UniqueName: \"kubernetes.io/projected/2a762473-781e-436f-bc99-584a5301abc3-kube-api-access-kxvqk\") pod \"machine-config-operator-74547568cd-vhrp6\" (UID: \"2a762473-781e-436f-bc99-584a5301abc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.745241 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-etcd-ca\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.753615 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-etcd-ca\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.753752 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76e1842c-1bb9-492c-9494-55a872376b54-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd7nh\" (UID: \"76e1842c-1bb9-492c-9494-55a872376b54\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.753798 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6260cd7-9202-46cc-b943-b60aaa0e07ff-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqmhh\" (UID: \"d6260cd7-9202-46cc-b943-b60aaa0e07ff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.755108 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shc8r\" (UniqueName: \"kubernetes.io/projected/aff32d6f-604f-49f1-8547-bea4a259ed45-kube-api-access-shc8r\") pod \"openshift-apiserver-operator-796bbdcf4f-zrncs\" (UID: \"aff32d6f-604f-49f1-8547-bea4a259ed45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.755367 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk4pk\" (UniqueName: \"kubernetes.io/projected/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-kube-api-access-kk4pk\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.755411 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fca40e01-6a4e-46e1-970d-60b8436aa04e-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-sxf5m\" (UID: \"fca40e01-6a4e-46e1-970d-60b8436aa04e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sxf5m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.755442 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-secret-volume\") pod \"collect-profiles-29490225-v2v7z\" (UID: \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.760823 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-rgn89"] Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.777208 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6260cd7-9202-46cc-b943-b60aaa0e07ff-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqmhh\" (UID: \"d6260cd7-9202-46cc-b943-b60aaa0e07ff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.777850 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-459vt\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-kube-api-access-459vt\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.777900 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e8b535af-199e-479d-a619-194b396b8eb5-node-bootstrap-token\") pod \"machine-config-server-wkkp2\" (UID: \"e8b535af-199e-479d-a619-194b396b8eb5\") " pod="openshift-machine-config-operator/machine-config-server-wkkp2" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.777925 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-config-volume\") pod \"collect-profiles-29490225-v2v7z\" (UID: \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.777949 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2rw7\" (UniqueName: \"kubernetes.io/projected/4cdf983d-19e2-445c-9ae4-237ec10dd839-kube-api-access-z2rw7\") pod \"service-ca-operator-777779d784-7c9hs\" (UID: \"4cdf983d-19e2-445c-9ae4-237ec10dd839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.777976 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2a762473-781e-436f-bc99-584a5301abc3-images\") pod \"machine-config-operator-74547568cd-vhrp6\" (UID: \"2a762473-781e-436f-bc99-584a5301abc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.777997 4806 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-stats-auth\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.778013 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84d147bb-634e-40fb-a631-91ff228c0801-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vd8ts\" (UID: \"84d147bb-634e-40fb-a631-91ff228c0801\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.778028 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e8b535af-199e-479d-a619-194b396b8eb5-certs\") pod \"machine-config-server-wkkp2\" (UID: \"e8b535af-199e-479d-a619-194b396b8eb5\") " pod="openshift-machine-config-operator/machine-config-server-wkkp2" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.778047 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-metrics-certs\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.778064 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-mountpoint-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.778095 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e1842c-1bb9-492c-9494-55a872376b54-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd7nh\" (UID: \"76e1842c-1bb9-492c-9494-55a872376b54\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.778111 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58108b0d-1028-4924-b025-1c11d3238dc1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-46429\" (UID: \"58108b0d-1028-4924-b025-1c11d3238dc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.778128 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6260cd7-9202-46cc-b943-b60aaa0e07ff-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqmhh\" (UID: \"d6260cd7-9202-46cc-b943-b60aaa0e07ff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.778146 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tdsn\" (UniqueName: 
\"kubernetes.io/projected/e8b535af-199e-479d-a619-194b396b8eb5-kube-api-access-5tdsn\") pod \"machine-config-server-wkkp2\" (UID: \"e8b535af-199e-479d-a619-194b396b8eb5\") " pod="openshift-machine-config-operator/machine-config-server-wkkp2" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.778162 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/58108b0d-1028-4924-b025-1c11d3238dc1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-46429\" (UID: \"58108b0d-1028-4924-b025-1c11d3238dc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.781245 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e1842c-1bb9-492c-9494-55a872376b54-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd7nh\" (UID: \"76e1842c-1bb9-492c-9494-55a872376b54\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.782203 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bcae027-4e25-4c41-bbc9-639927f58691-registry-certificates\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.783299 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/58108b0d-1028-4924-b025-1c11d3238dc1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-46429\" (UID: \"58108b0d-1028-4924-b025-1c11d3238dc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.803130 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2a762473-781e-436f-bc99-584a5301abc3-auth-proxy-config\") pod \"machine-config-operator-74547568cd-vhrp6\" (UID: \"2a762473-781e-436f-bc99-584a5301abc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.804807 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2a762473-781e-436f-bc99-584a5301abc3-images\") pod \"machine-config-operator-74547568cd-vhrp6\" (UID: \"2a762473-781e-436f-bc99-584a5301abc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.818755 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/aa802450-996f-4548-a763-1e08d1cc564a-srv-cert\") pod \"catalog-operator-68c6474976-hq8jp\" (UID: \"aa802450-996f-4548-a763-1e08d1cc564a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.819206 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-registry-tls\") pod \"image-registry-697d97f7c8-tncnb\" (UID: 
\"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.824286 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aff32d6f-604f-49f1-8547-bea4a259ed45-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zrncs\" (UID: \"aff32d6f-604f-49f1-8547-bea4a259ed45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.827482 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n4p7\" (UniqueName: \"kubernetes.io/projected/19ceab6e-3284-4ff6-b3a7-541d73c25150-kube-api-access-4n4p7\") pod \"olm-operator-6b444d44fb-5g5mr\" (UID: \"19ceab6e-3284-4ff6-b3a7-541d73c25150\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.828288 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/19ceab6e-3284-4ff6-b3a7-541d73c25150-srv-cert\") pod \"olm-operator-6b444d44fb-5g5mr\" (UID: \"19ceab6e-3284-4ff6-b3a7-541d73c25150\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.829416 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lptmw\" (UniqueName: \"kubernetes.io/projected/58108b0d-1028-4924-b025-1c11d3238dc1-kube-api-access-lptmw\") pod \"cluster-image-registry-operator-dc59b4c8b-46429\" (UID: \"58108b0d-1028-4924-b025-1c11d3238dc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.830100 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-default-certificate\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.831303 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l87wh\" (UniqueName: \"kubernetes.io/projected/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-kube-api-access-l87wh\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.831743 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/58108b0d-1028-4924-b025-1c11d3238dc1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-46429\" (UID: \"58108b0d-1028-4924-b025-1c11d3238dc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.832362 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m79qc\" (UniqueName: \"kubernetes.io/projected/bd8d4294-4075-456c-ab53-d3646b5117b5-kube-api-access-m79qc\") pod \"control-plane-machine-set-operator-78cbb6b69f-jdcjm\" (UID: \"bd8d4294-4075-456c-ab53-d3646b5117b5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 
07:55:54.840304 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrqf4\" (UniqueName: \"kubernetes.io/projected/fca40e01-6a4e-46e1-970d-60b8436aa04e-kube-api-access-jrqf4\") pod \"multus-admission-controller-857f4d67dd-sxf5m\" (UID: \"fca40e01-6a4e-46e1-970d-60b8436aa04e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sxf5m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.841249 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2df8b\" (UniqueName: \"kubernetes.io/projected/aa802450-996f-4548-a763-1e08d1cc564a-kube-api-access-2df8b\") pod \"catalog-operator-68c6474976-hq8jp\" (UID: \"aa802450-996f-4548-a763-1e08d1cc564a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.847934 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3bacec42-b4b2-4638-9ea2-8db24615e6db-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6sjl8\" (UID: \"3bacec42-b4b2-4638-9ea2-8db24615e6db\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.849468 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-serving-cert\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.849756 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76e1842c-1bb9-492c-9494-55a872376b54-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd7nh\" (UID: \"76e1842c-1bb9-492c-9494-55a872376b54\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.850352 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bcae027-4e25-4c41-bbc9-639927f58691-installation-pull-secrets\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.855763 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6dac3476-1363-4ddc-8fa5-9f0e110e5c38-etcd-client\") pod \"etcd-operator-b45778765-sf2rk\" (UID: \"6dac3476-1363-4ddc-8fa5-9f0e110e5c38\") " pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.856304 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.857202 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5cd7\" (UniqueName: \"kubernetes.io/projected/b624b9bd-2dce-41fd-8abf-f21908db8f6c-kube-api-access-k5cd7\") pod \"openshift-config-operator-7777fb866f-sbd2c\" (UID: \"b624b9bd-2dce-41fd-8abf-f21908db8f6c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.858621 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76e1842c-1bb9-492c-9494-55a872376b54-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-zd7nh\" (UID: \"76e1842c-1bb9-492c-9494-55a872376b54\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.859664 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.873070 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/fca40e01-6a4e-46e1-970d-60b8436aa04e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sxf5m\" (UID: \"fca40e01-6a4e-46e1-970d-60b8436aa04e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sxf5m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.875387 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-bound-sa-token\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.875898 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tgx9\" (UniqueName: \"kubernetes.io/projected/d6260cd7-9202-46cc-b943-b60aaa0e07ff-kube-api-access-8tgx9\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqmhh\" (UID: \"d6260cd7-9202-46cc-b943-b60aaa0e07ff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.880209 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:54 crc kubenswrapper[4806]: E0126 07:55:54.886705 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:55.386674975 +0000 UTC m=+134.651083031 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.895656 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5cfe7537-639f-4bbc-9e05-bca737f295ce-apiservice-cert\") pod \"packageserver-d55dfcdfc-f9fk8\" (UID: \"5cfe7537-639f-4bbc-9e05-bca737f295ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.897647 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5cfe7537-639f-4bbc-9e05-bca737f295ce-webhook-cert\") pod \"packageserver-d55dfcdfc-f9fk8\" (UID: \"5cfe7537-639f-4bbc-9e05-bca737f295ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.897772 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5192ad78-580b-4126-9241-1d52339308b1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mkfwt\" (UID: \"5192ad78-580b-4126-9241-1d52339308b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.897883 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9phg\" (UniqueName: \"kubernetes.io/projected/84d147bb-634e-40fb-a631-91ff228c0801-kube-api-access-n9phg\") pod \"marketplace-operator-79b997595-vd8ts\" (UID: \"84d147bb-634e-40fb-a631-91ff228c0801\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.897962 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61b3db9d-f649-4424-8cc5-b7b3f3f1161b-config-volume\") pod \"dns-default-jrg5t\" (UID: \"61b3db9d-f649-4424-8cc5-b7b3f3f1161b\") " pod="openshift-dns/dns-default-jrg5t" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.898034 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2znn\" (UniqueName: \"kubernetes.io/projected/61b3db9d-f649-4424-8cc5-b7b3f3f1161b-kube-api-access-p2znn\") pod \"dns-default-jrg5t\" (UID: \"61b3db9d-f649-4424-8cc5-b7b3f3f1161b\") " pod="openshift-dns/dns-default-jrg5t" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.899139 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckv4w\" (UniqueName: \"kubernetes.io/projected/ac2990c0-273e-4614-821b-a1db45c09c0f-kube-api-access-ckv4w\") pod \"package-server-manager-789f6589d5-ncld5\" (UID: \"ac2990c0-273e-4614-821b-a1db45c09c0f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.899260 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/ac2990c0-273e-4614-821b-a1db45c09c0f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ncld5\" (UID: \"ac2990c0-273e-4614-821b-a1db45c09c0f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.899340 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pfpr\" (UniqueName: \"kubernetes.io/projected/e04696fa-1756-4ad4-9908-a09f7339c584-kube-api-access-8pfpr\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.899423 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61b3db9d-f649-4424-8cc5-b7b3f3f1161b-metrics-tls\") pod \"dns-default-jrg5t\" (UID: \"61b3db9d-f649-4424-8cc5-b7b3f3f1161b\") " pod="openshift-dns/dns-default-jrg5t" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.899492 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtswp\" (UniqueName: \"kubernetes.io/projected/001f7476-4d33-474a-80c9-8e99cb19b4e5-kube-api-access-mtswp\") pod \"service-ca-9c57cc56f-bs67m\" (UID: \"001f7476-4d33-474a-80c9-8e99cb19b4e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.899589 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4cdf983d-19e2-445c-9ae4-237ec10dd839-serving-cert\") pod \"service-ca-operator-777779d784-7c9hs\" (UID: \"4cdf983d-19e2-445c-9ae4-237ec10dd839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.899743 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-secret-volume\") pod \"collect-profiles-29490225-v2v7z\" (UID: \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.899848 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e8b535af-199e-479d-a619-194b396b8eb5-node-bootstrap-token\") pod \"machine-config-server-wkkp2\" (UID: \"e8b535af-199e-479d-a619-194b396b8eb5\") " pod="openshift-machine-config-operator/machine-config-server-wkkp2" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.899919 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-config-volume\") pod \"collect-profiles-29490225-v2v7z\" (UID: \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.900016 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2rw7\" (UniqueName: \"kubernetes.io/projected/4cdf983d-19e2-445c-9ae4-237ec10dd839-kube-api-access-z2rw7\") pod \"service-ca-operator-777779d784-7c9hs\" (UID: \"4cdf983d-19e2-445c-9ae4-237ec10dd839\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.900132 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84d147bb-634e-40fb-a631-91ff228c0801-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vd8ts\" (UID: \"84d147bb-634e-40fb-a631-91ff228c0801\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.900201 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e8b535af-199e-479d-a619-194b396b8eb5-certs\") pod \"machine-config-server-wkkp2\" (UID: \"e8b535af-199e-479d-a619-194b396b8eb5\") " pod="openshift-machine-config-operator/machine-config-server-wkkp2" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.900275 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-mountpoint-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.900374 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tdsn\" (UniqueName: \"kubernetes.io/projected/e8b535af-199e-479d-a619-194b396b8eb5-kube-api-access-5tdsn\") pod \"machine-config-server-wkkp2\" (UID: \"e8b535af-199e-479d-a619-194b396b8eb5\") " pod="openshift-machine-config-operator/machine-config-server-wkkp2" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.900464 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/001f7476-4d33-474a-80c9-8e99cb19b4e5-signing-key\") pod \"service-ca-9c57cc56f-bs67m\" (UID: \"001f7476-4d33-474a-80c9-8e99cb19b4e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.900570 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-plugins-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.900659 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tfrg\" (UniqueName: \"kubernetes.io/projected/380d2e61-5b11-446b-b3b9-d3e7fa65569e-kube-api-access-7tfrg\") pod \"migrator-59844c95c7-4r9h2\" (UID: \"380d2e61-5b11-446b-b3b9-d3e7fa65569e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4r9h2" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.902504 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h82dx\" (UniqueName: \"kubernetes.io/projected/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-kube-api-access-h82dx\") pod \"collect-profiles-29490225-v2v7z\" (UID: \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.902643 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4cdf983d-19e2-445c-9ae4-237ec10dd839-config\") pod \"service-ca-operator-777779d784-7c9hs\" (UID: \"4cdf983d-19e2-445c-9ae4-237ec10dd839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.902733 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5192ad78-580b-4126-9241-1d52339308b1-proxy-tls\") pod \"machine-config-controller-84d6567774-mkfwt\" (UID: \"5192ad78-580b-4126-9241-1d52339308b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.902812 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-registration-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.902882 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-csi-data-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.902957 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5cfe7537-639f-4bbc-9e05-bca737f295ce-tmpfs\") pod \"packageserver-d55dfcdfc-f9fk8\" (UID: \"5cfe7537-639f-4bbc-9e05-bca737f295ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.903032 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84d147bb-634e-40fb-a631-91ff228c0801-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vd8ts\" (UID: \"84d147bb-634e-40fb-a631-91ff228c0801\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.903123 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vpsv\" (UniqueName: \"kubernetes.io/projected/5cfe7537-639f-4bbc-9e05-bca737f295ce-kube-api-access-6vpsv\") pod \"packageserver-d55dfcdfc-f9fk8\" (UID: \"5cfe7537-639f-4bbc-9e05-bca737f295ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.904751 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/58108b0d-1028-4924-b025-1c11d3238dc1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-46429\" (UID: \"58108b0d-1028-4924-b025-1c11d3238dc1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.905235 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5cfe7537-639f-4bbc-9e05-bca737f295ce-apiservice-cert\") pod \"packageserver-d55dfcdfc-f9fk8\" (UID: \"5cfe7537-639f-4bbc-9e05-bca737f295ce\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.888323 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d6260cd7-9202-46cc-b943-b60aaa0e07ff-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqmhh\" (UID: \"d6260cd7-9202-46cc-b943-b60aaa0e07ff\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.896496 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f42b0469-833b-4dca-bc17-71e62b73f378-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mkvhj\" (UID: \"f42b0469-833b-4dca-bc17-71e62b73f378\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.906174 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-metrics-certs\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.907771 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5192ad78-580b-4126-9241-1d52339308b1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mkfwt\" (UID: \"5192ad78-580b-4126-9241-1d52339308b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.908038 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/61b3db9d-f649-4424-8cc5-b7b3f3f1161b-metrics-tls\") pod \"dns-default-jrg5t\" (UID: \"61b3db9d-f649-4424-8cc5-b7b3f3f1161b\") " pod="openshift-dns/dns-default-jrg5t" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.908467 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61b3db9d-f649-4424-8cc5-b7b3f3f1161b-config-volume\") pod \"dns-default-jrg5t\" (UID: \"61b3db9d-f649-4424-8cc5-b7b3f3f1161b\") " pod="openshift-dns/dns-default-jrg5t" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.909021 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5cfe7537-639f-4bbc-9e05-bca737f295ce-webhook-cert\") pod \"packageserver-d55dfcdfc-f9fk8\" (UID: \"5cfe7537-639f-4bbc-9e05-bca737f295ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.909254 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-mountpoint-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.909601 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4cdf983d-19e2-445c-9ae4-237ec10dd839-config\") pod 
\"service-ca-operator-777779d784-7c9hs\" (UID: \"4cdf983d-19e2-445c-9ae4-237ec10dd839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.909778 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-csi-data-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.909919 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-registration-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.909976 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-plugins-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.910711 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5cfe7537-639f-4bbc-9e05-bca737f295ce-tmpfs\") pod \"packageserver-d55dfcdfc-f9fk8\" (UID: \"5cfe7537-639f-4bbc-9e05-bca737f295ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.903198 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/001f7476-4d33-474a-80c9-8e99cb19b4e5-signing-cabundle\") pod \"service-ca-9c57cc56f-bs67m\" (UID: \"001f7476-4d33-474a-80c9-8e99cb19b4e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.914042 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84d147bb-634e-40fb-a631-91ff228c0801-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vd8ts\" (UID: \"84d147bb-634e-40fb-a631-91ff228c0801\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.918328 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-config-volume\") pod \"collect-profiles-29490225-v2v7z\" (UID: \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.921450 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-secret-volume\") pod \"collect-profiles-29490225-v2v7z\" (UID: \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.921533 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4cdf983d-19e2-445c-9ae4-237ec10dd839-serving-cert\") pod \"service-ca-operator-777779d784-7c9hs\" (UID: \"4cdf983d-19e2-445c-9ae4-237ec10dd839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.920998 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/001f7476-4d33-474a-80c9-8e99cb19b4e5-signing-cabundle\") pod \"service-ca-9c57cc56f-bs67m\" (UID: \"001f7476-4d33-474a-80c9-8e99cb19b4e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.921992 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r24dj\" (UniqueName: \"kubernetes.io/projected/5192ad78-580b-4126-9241-1d52339308b1-kube-api-access-r24dj\") pod \"machine-config-controller-84d6567774-mkfwt\" (UID: \"5192ad78-580b-4126-9241-1d52339308b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.923827 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-stats-auth\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.930330 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e8b535af-199e-479d-a619-194b396b8eb5-certs\") pod \"machine-config-server-wkkp2\" (UID: \"e8b535af-199e-479d-a619-194b396b8eb5\") " pod="openshift-machine-config-operator/machine-config-server-wkkp2" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.931498 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/001f7476-4d33-474a-80c9-8e99cb19b4e5-signing-key\") pod \"service-ca-9c57cc56f-bs67m\" (UID: \"001f7476-4d33-474a-80c9-8e99cb19b4e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.931997 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5192ad78-580b-4126-9241-1d52339308b1-proxy-tls\") pod \"machine-config-controller-84d6567774-mkfwt\" (UID: \"5192ad78-580b-4126-9241-1d52339308b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.934317 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxvqk\" (UniqueName: \"kubernetes.io/projected/2a762473-781e-436f-bc99-584a5301abc3-kube-api-access-kxvqk\") pod \"machine-config-operator-74547568cd-vhrp6\" (UID: \"2a762473-781e-436f-bc99-584a5301abc3\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.935252 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e8b535af-199e-479d-a619-194b396b8eb5-node-bootstrap-token\") pod \"machine-config-server-wkkp2\" (UID: \"e8b535af-199e-479d-a619-194b396b8eb5\") " pod="openshift-machine-config-operator/machine-config-server-wkkp2" Jan 26 07:55:54 crc kubenswrapper[4806]: 
I0126 07:55:54.936876 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:54 crc kubenswrapper[4806]: E0126 07:55:54.938015 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:55.437996683 +0000 UTC m=+134.702404739 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.943988 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/41318e1c-f829-49d4-9ad8-5dc639973784-cert\") pod \"ingress-canary-nkrgc\" (UID: \"41318e1c-f829-49d4-9ad8-5dc639973784\") " pod="openshift-ingress-canary/ingress-canary-nkrgc" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.945022 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4lrg\" (UniqueName: \"kubernetes.io/projected/41318e1c-f829-49d4-9ad8-5dc639973784-kube-api-access-l4lrg\") pod \"ingress-canary-nkrgc\" (UID: \"41318e1c-f829-49d4-9ad8-5dc639973784\") " pod="openshift-ingress-canary/ingress-canary-nkrgc" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.946933 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-socket-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.948942 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e04696fa-1756-4ad4-9908-a09f7339c584-socket-dir\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.954372 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84d147bb-634e-40fb-a631-91ff228c0801-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vd8ts\" (UID: \"84d147bb-634e-40fb-a631-91ff228c0801\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.954714 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bacec42-b4b2-4638-9ea2-8db24615e6db-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6sjl8\" (UID: \"3bacec42-b4b2-4638-9ea2-8db24615e6db\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.956862 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm"] Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.960762 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ac2990c0-273e-4614-821b-a1db45c09c0f-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-ncld5\" (UID: \"ac2990c0-273e-4614-821b-a1db45c09c0f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.963105 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/41318e1c-f829-49d4-9ad8-5dc639973784-cert\") pod \"ingress-canary-nkrgc\" (UID: \"41318e1c-f829-49d4-9ad8-5dc639973784\") " pod="openshift-ingress-canary/ingress-canary-nkrgc" Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.971070 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" event={"ID":"0f3802bf-e4bc-4952-9e22-428d62ec0349","Type":"ContainerStarted","Data":"8232246717cfc9b338aaa9e6a9dfb8bbb789b35c5e33d6c4d1efa9790c43708e"} Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.971119 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" event={"ID":"0f3802bf-e4bc-4952-9e22-428d62ec0349","Type":"ContainerStarted","Data":"666a98aa0cdaa44ee3382bb90a6dcfbffaf2f3fad21427457106fe9166d4d300"} Jan 26 07:55:54 crc kubenswrapper[4806]: I0126 07:55:54.988293 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shc8r\" (UniqueName: \"kubernetes.io/projected/aff32d6f-604f-49f1-8547-bea4a259ed45-kube-api-access-shc8r\") pod \"openshift-apiserver-operator-796bbdcf4f-zrncs\" (UID: \"aff32d6f-604f-49f1-8547-bea4a259ed45\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.005318 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qd6mh" event={"ID":"ee89739e-edc1-41b5-bf4a-da80ba0a59aa","Type":"ContainerStarted","Data":"e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54"} Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.005369 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qd6mh" event={"ID":"ee89739e-edc1-41b5-bf4a-da80ba0a59aa","Type":"ContainerStarted","Data":"2ff1572bd7b25c2ebc3727b20392a89fd2ed2550d4dfe4c8dcb4b223ba6548ad"} Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.006207 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-459vt\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-kube-api-access-459vt\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.013041 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk4pk\" (UniqueName: 
\"kubernetes.io/projected/2ce69172-bf74-4c4a-8aeb-9b1d86f50254-kube-api-access-kk4pk\") pod \"router-default-5444994796-l5mfg\" (UID: \"2ce69172-bf74-4c4a-8aeb-9b1d86f50254\") " pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.017447 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" event={"ID":"8dc6ffac-58e2-477a-adaf-3eb1de776a9c","Type":"ContainerStarted","Data":"5076b8ab39881404b02c3132123c36231dddff18ec51501de38f27121cf66f16"} Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.023546 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h82dx\" (UniqueName: \"kubernetes.io/projected/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-kube-api-access-h82dx\") pod \"collect-profiles-29490225-v2v7z\" (UID: \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.027304 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f"] Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.037774 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" event={"ID":"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9","Type":"ContainerStarted","Data":"5aac34ac24abf97ba0173038040d4a0790414cdbfaf10cef46f80cc6fad20d09"} Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.048674 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckv4w\" (UniqueName: \"kubernetes.io/projected/ac2990c0-273e-4614-821b-a1db45c09c0f-kube-api-access-ckv4w\") pod \"package-server-manager-789f6589d5-ncld5\" (UID: \"ac2990c0-273e-4614-821b-a1db45c09c0f\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.053248 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:55 crc kubenswrapper[4806]: E0126 07:55:55.053442 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:55.553419769 +0000 UTC m=+134.817827825 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.053966 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:55 crc kubenswrapper[4806]: E0126 07:55:55.054314 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:55.554300654 +0000 UTC m=+134.818708710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.059613 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.062627 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk"] Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.063318 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d46vj"] Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.076158 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz"] Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.080241 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.087231 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pfpr\" (UniqueName: \"kubernetes.io/projected/e04696fa-1756-4ad4-9908-a09f7339c584-kube-api-access-8pfpr\") pod \"csi-hostpathplugin-mls86\" (UID: \"e04696fa-1756-4ad4-9908-a09f7339c584\") " pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.087887 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.088027 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9phg\" (UniqueName: \"kubernetes.io/projected/84d147bb-634e-40fb-a631-91ff228c0801-kube-api-access-n9phg\") pod \"marketplace-operator-79b997595-vd8ts\" (UID: \"84d147bb-634e-40fb-a631-91ff228c0801\") " pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.102626 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.112871 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.115286 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-4gdvx"] Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.117209 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.117372 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtswp\" (UniqueName: \"kubernetes.io/projected/001f7476-4d33-474a-80c9-8e99cb19b4e5-kube-api-access-mtswp\") pod \"service-ca-9c57cc56f-bs67m\" (UID: \"001f7476-4d33-474a-80c9-8e99cb19b4e5\") " pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.128454 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vpsv\" (UniqueName: \"kubernetes.io/projected/5cfe7537-639f-4bbc-9e05-bca737f295ce-kube-api-access-6vpsv\") pod \"packageserver-d55dfcdfc-f9fk8\" (UID: \"5cfe7537-639f-4bbc-9e05-bca737f295ce\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.128766 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.138259 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.139939 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.146500 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tfrg\" (UniqueName: \"kubernetes.io/projected/380d2e61-5b11-446b-b3b9-d3e7fa65569e-kube-api-access-7tfrg\") pod \"migrator-59844c95c7-4r9h2\" (UID: \"380d2e61-5b11-446b-b3b9-d3e7fa65569e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4r9h2" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.156255 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.157972 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-sxf5m" Jan 26 07:55:55 crc kubenswrapper[4806]: E0126 07:55:55.158467 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:55.658444942 +0000 UTC m=+134.922852998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.161011 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:55 crc kubenswrapper[4806]: E0126 07:55:55.163303 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:55.663286679 +0000 UTC m=+134.927694735 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.165951 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.178453 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.199586 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.200426 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.201855 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2znn\" (UniqueName: \"kubernetes.io/projected/61b3db9d-f649-4424-8cc5-b7b3f3f1161b-kube-api-access-p2znn\") pod \"dns-default-jrg5t\" (UID: \"61b3db9d-f649-4424-8cc5-b7b3f3f1161b\") " pod="openshift-dns/dns-default-jrg5t" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.201980 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.214448 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.218742 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2rw7\" (UniqueName: \"kubernetes.io/projected/4cdf983d-19e2-445c-9ae4-237ec10dd839-kube-api-access-z2rw7\") pod \"service-ca-operator-777779d784-7c9hs\" (UID: \"4cdf983d-19e2-445c-9ae4-237ec10dd839\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.223144 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4r9h2" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.223481 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tdsn\" (UniqueName: \"kubernetes.io/projected/e8b535af-199e-479d-a619-194b396b8eb5-kube-api-access-5tdsn\") pod \"machine-config-server-wkkp2\" (UID: \"e8b535af-199e-479d-a619-194b396b8eb5\") " pod="openshift-machine-config-operator/machine-config-server-wkkp2" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.245704 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.252008 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.253884 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-jrg5t" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.297749 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wkkp2" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.299574 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4lrg\" (UniqueName: \"kubernetes.io/projected/41318e1c-f829-49d4-9ad8-5dc639973784-kube-api-access-l4lrg\") pod \"ingress-canary-nkrgc\" (UID: \"41318e1c-f829-49d4-9ad8-5dc639973784\") " pod="openshift-ingress-canary/ingress-canary-nkrgc" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.306111 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r24dj\" (UniqueName: \"kubernetes.io/projected/5192ad78-580b-4126-9241-1d52339308b1-kube-api-access-r24dj\") pod \"machine-config-controller-84d6567774-mkfwt\" (UID: \"5192ad78-580b-4126-9241-1d52339308b1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.307810 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.331160 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-htpwn"] Jan 26 07:55:55 crc kubenswrapper[4806]: E0126 07:55:55.347622 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:55.847574418 +0000 UTC m=+135.111982474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.348108 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mls86" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.373080 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hdxh9"] Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.422960 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:55 crc kubenswrapper[4806]: E0126 07:55:55.423475 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 07:55:55.923457618 +0000 UTC m=+135.187865674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:55 crc kubenswrapper[4806]: W0126 07:55:55.471442 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f97371c_7dc2_4170_90e7_f044dcc62f2a.slice/crio-5e9efc969d99a819e52e3d41ad90acba26e9e917535e82d1fe1f833fdd07d890 WatchSource:0}: Error finding container 5e9efc969d99a819e52e3d41ad90acba26e9e917535e82d1fe1f833fdd07d890: Status 404 returned error can't find the container with id 5e9efc969d99a819e52e3d41ad90acba26e9e917535e82d1fe1f833fdd07d890 Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.506576 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-s7jrc"] Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.507911 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.528182 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:55 crc kubenswrapper[4806]: E0126 07:55:55.528614 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:56.028600014 +0000 UTC m=+135.293008070 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.545054 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nkrgc" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.629663 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:55 crc kubenswrapper[4806]: E0126 07:55:55.629981 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:56.129970744 +0000 UTC m=+135.394378800 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.636580 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr"] Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.731664 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:55 crc kubenswrapper[4806]: E0126 07:55:55.732005 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:56.231988212 +0000 UTC m=+135.496396268 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:55 crc kubenswrapper[4806]: W0126 07:55:55.755789 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ce69172_bf74_4c4a_8aeb_9b1d86f50254.slice/crio-aec3ff5ce151123d0c089db74a4735a74f43eed2aabcb807f2582da2b773c59e WatchSource:0}: Error finding container aec3ff5ce151123d0c089db74a4735a74f43eed2aabcb807f2582da2b773c59e: Status 404 returned error can't find the container with id aec3ff5ce151123d0c089db74a4735a74f43eed2aabcb807f2582da2b773c59e Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.833996 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:55 crc kubenswrapper[4806]: E0126 07:55:55.834283 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:56.334272458 +0000 UTC m=+135.598680514 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:55 crc kubenswrapper[4806]: W0126 07:55:55.845676 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19ceab6e_3284_4ff6_b3a7_541d73c25150.slice/crio-da6e3b58033b82da3573b61ec53d56b3fdf02aa0228ad2ee9027d2c16017fcc9 WatchSource:0}: Error finding container da6e3b58033b82da3573b61ec53d56b3fdf02aa0228ad2ee9027d2c16017fcc9: Status 404 returned error can't find the container with id da6e3b58033b82da3573b61ec53d56b3fdf02aa0228ad2ee9027d2c16017fcc9 Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.849723 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm"] Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.940738 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:55 crc kubenswrapper[4806]: E0126 07:55:55.941033 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:56.441007109 +0000 UTC m=+135.705415165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.941421 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:55 crc kubenswrapper[4806]: E0126 07:55:55.945593 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:56.445183217 +0000 UTC m=+135.709591273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.977903 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" podStartSLOduration=115.97788726900001 podStartE2EDuration="1m55.977887269s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:55.923095264 +0000 UTC m=+135.187503320" watchObservedRunningTime="2026-01-26 07:55:55.977887269 +0000 UTC m=+135.242295315" Jan 26 07:55:55 crc kubenswrapper[4806]: I0126 07:55:55.978965 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-qd6mh" podStartSLOduration=115.97895873 podStartE2EDuration="1m55.97895873s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:55.974426492 +0000 UTC m=+135.238834548" watchObservedRunningTime="2026-01-26 07:55:55.97895873 +0000 UTC m=+135.243366786" Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.042408 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:56 crc kubenswrapper[4806]: E0126 07:55:56.043947 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:56.543930732 +0000 UTC m=+135.808338788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.099432 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rgn89" event={"ID":"3b7f2154-fe8d-4ae4-8009-feb30d797f9b","Type":"ContainerStarted","Data":"66c688dd3c6839200968e406999035a51ef5d44a5155de61e873773a436b6813"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.099481 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-rgn89" event={"ID":"3b7f2154-fe8d-4ae4-8009-feb30d797f9b","Type":"ContainerStarted","Data":"aa6d37ef914e6aaaf8b811c01fcfeace6602b67b05eb18e15ea6c9809f6e7ff9"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.099788 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.102427 4806 patch_prober.go:28] interesting pod/console-operator-58897d9998-rgn89 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.102469 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-rgn89" podUID="3b7f2154-fe8d-4ae4-8009-feb30d797f9b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.151722 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:56 crc kubenswrapper[4806]: E0126 07:55:56.152165 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:56.652152865 +0000 UTC m=+135.916560921 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.200831 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" event={"ID":"0f3802bf-e4bc-4952-9e22-428d62ec0349","Type":"ContainerStarted","Data":"2ccce792376d769f789406159930bde98e4c3a9df45195e488be84844f9d7846"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.258319 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:56 crc kubenswrapper[4806]: E0126 07:55:56.258699 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:56.758655839 +0000 UTC m=+136.023063895 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.259028 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:56 crc kubenswrapper[4806]: E0126 07:55:56.260818 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:56.76080981 +0000 UTC m=+136.025217866 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.305151 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" event={"ID":"8dc6ffac-58e2-477a-adaf-3eb1de776a9c","Type":"ContainerStarted","Data":"c0965f0fde98f7e66ccff85fdaa45f01c184576ca35a2a61a9948f65256a99fd"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.340586 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" event={"ID":"d66e251a-5a67-45c4-be63-2f46b56df1a5","Type":"ContainerStarted","Data":"ac13517043271dfada98aa335b446c39c3de3cf4582d80da3b3b9528a1d8cea0"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.344446 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" event={"ID":"14769a57-f19b-4d49-868f-d1754827714b","Type":"ContainerStarted","Data":"9f3f7f56073ccf3343e9a450f5da5b876cfd6487b9c7fb2a44c4b61e3d7fd147"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.362610 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:56 crc kubenswrapper[4806]: E0126 07:55:56.363992 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:56.86397447 +0000 UTC m=+136.128382526 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.428876 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wkkp2" event={"ID":"e8b535af-199e-479d-a619-194b396b8eb5","Type":"ContainerStarted","Data":"e1214aa6c380c0eae76808c390a67d5f8a04f99f829dfa38d5d43031184978f4"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.445294 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-l5mfg" event={"ID":"2ce69172-bf74-4c4a-8aeb-9b1d86f50254","Type":"ContainerStarted","Data":"aec3ff5ce151123d0c089db74a4735a74f43eed2aabcb807f2582da2b773c59e"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.464919 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk" event={"ID":"ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1","Type":"ContainerStarted","Data":"6b905cee8193819ef2fa45c50da5447627506c9b68b29f25e51b29bb340a5c1f"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.466224 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:56 crc kubenswrapper[4806]: E0126 07:55:56.466722 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:56.966709959 +0000 UTC m=+136.231118015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.467714 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429"] Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.495032 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" event={"ID":"19ceab6e-3284-4ff6-b3a7-541d73c25150","Type":"ContainerStarted","Data":"da6e3b58033b82da3573b61ec53d56b3fdf02aa0228ad2ee9027d2c16017fcc9"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.525589 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sxf5m"] Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.531206 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c"] Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.536324 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d46vj" event={"ID":"e658ebdf-74ef-4dab-b48e-53557c516bd3","Type":"ContainerStarted","Data":"9cc5be41bb48193c089b031420da60acef8b3c3b2e0a7d80cf0e1b15bbff2c97"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.567079 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:56 crc kubenswrapper[4806]: E0126 07:55:56.567465 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:57.067449751 +0000 UTC m=+136.331857807 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.577907 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" event={"ID":"3ba883c9-d8a0-42ec-8894-87769eabf95b","Type":"ContainerStarted","Data":"9c1fe5f476bd0fbd46eba44b5bd593265830df279ebb914369759b1e4369fd2c"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.599706 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp"] Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.639721 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" event={"ID":"2a0dd2e2-3942-4daa-a45b-17f7bdc66d00","Type":"ContainerStarted","Data":"a55e51effe2ce901cedaceca0491e8b18ef4caf27c687d03601cb1e4f3e057e6"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.667273 4806 generic.go:334] "Generic (PLEG): container finished" podID="cf7f962d-5924-4e0e-bd23-cd46ba65f5a9" containerID="7e32b6bd1f9f550e60de1776b262447bcfae273ba7954436f91dcb1729d26894" exitCode=0 Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.667352 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" event={"ID":"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9","Type":"ContainerDied","Data":"7e32b6bd1f9f550e60de1776b262447bcfae273ba7954436f91dcb1729d26894"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.673803 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:56 crc kubenswrapper[4806]: E0126 07:55:56.674091 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:57.174077849 +0000 UTC m=+136.438485895 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.706847 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-s7jrc" event={"ID":"8a123de3-5556-4e34-8433-52805089c13c","Type":"ContainerStarted","Data":"e791a32de99c149a80ab2ce489dc772f13c2a9acf4292a1afe911db98702181e"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.741081 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" event={"ID":"5f97371c-7dc2-4170-90e7-f044dcc62f2a","Type":"ContainerStarted","Data":"5e9efc969d99a819e52e3d41ad90acba26e9e917535e82d1fe1f833fdd07d890"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.786547 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:56 crc kubenswrapper[4806]: E0126 07:55:56.787689 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:57.287668593 +0000 UTC m=+136.552076649 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.818082 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" event={"ID":"5d00dcee-5512-4730-8743-e128136b9364","Type":"ContainerStarted","Data":"9b579192db54455216dcf6785c29f11a2dfab68451654f29ddd890aa35e25a0b"} Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.891853 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:56 crc kubenswrapper[4806]: E0126 07:55:56.892275 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:57.392257104 +0000 UTC m=+136.656665160 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.914157 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6"] Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.981284 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh"] Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.981649 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-gk5q9" podStartSLOduration=116.981628075 podStartE2EDuration="1m56.981628075s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:56.979105704 +0000 UTC m=+136.243513760" watchObservedRunningTime="2026-01-26 07:55:56.981628075 +0000 UTC m=+136.246036121" Jan 26 07:55:56 crc kubenswrapper[4806]: I0126 07:55:56.994187 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:56 crc kubenswrapper[4806]: E0126 07:55:56.995463 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:57.495448085 +0000 UTC m=+136.759856141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.033542 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5"] Jan 26 07:55:57 crc kubenswrapper[4806]: W0126 07:55:57.035245 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb624b9bd_2dce_41fd_8abf_f21908db8f6c.slice/crio-f4b75a2d8ad9ee2d5660067843777063e3d836f2817cb7193fa99872cd70dc7a WatchSource:0}: Error finding container f4b75a2d8ad9ee2d5660067843777063e3d836f2817cb7193fa99872cd70dc7a: Status 404 returned error can't find the container with id f4b75a2d8ad9ee2d5660067843777063e3d836f2817cb7193fa99872cd70dc7a Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.105301 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:57 crc kubenswrapper[4806]: E0126 07:55:57.106072 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:57.606041025 +0000 UTC m=+136.870449091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.110222 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" podStartSLOduration=117.110199602 podStartE2EDuration="1m57.110199602s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:57.09592761 +0000 UTC m=+136.360335676" watchObservedRunningTime="2026-01-26 07:55:57.110199602 +0000 UTC m=+136.374607658" Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.134645 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-bs67m"] Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.183730 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-4r9h2"] Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.191071 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-rgn89" podStartSLOduration=118.191028603 podStartE2EDuration="1m58.191028603s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:57.169349931 +0000 UTC m=+136.433757987" watchObservedRunningTime="2026-01-26 07:55:57.191028603 +0000 UTC m=+136.455436659" Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.236945 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:57 crc kubenswrapper[4806]: E0126 07:55:57.237383 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:57.73736743 +0000 UTC m=+137.001775486 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.336720 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs"] Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.338737 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:57 crc kubenswrapper[4806]: E0126 07:55:57.338998 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:57.838986227 +0000 UTC m=+137.103394283 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.347325 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8"] Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.445908 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:57 crc kubenswrapper[4806]: E0126 07:55:57.446407 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:57.946367176 +0000 UTC m=+137.210775232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.547755 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:57 crc kubenswrapper[4806]: E0126 07:55:57.548877 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:58.048862438 +0000 UTC m=+137.313270484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.657994 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:57 crc kubenswrapper[4806]: E0126 07:55:57.658441 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:58.158425328 +0000 UTC m=+137.422833384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.760841 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:57 crc kubenswrapper[4806]: E0126 07:55:57.761258 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:58.261242239 +0000 UTC m=+137.525650295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.767828 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-sf2rk"] Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.825293 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" event={"ID":"8dc6ffac-58e2-477a-adaf-3eb1de776a9c","Type":"ContainerStarted","Data":"b2dd3bb6f765bee918d282bc2fe7be4d3ff32ad7fb863f761c51a25e498cde2d"} Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.862548 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:57 crc kubenswrapper[4806]: E0126 07:55:57.863394 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:58.36337724 +0000 UTC m=+137.627785296 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.881184 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" event={"ID":"d66e251a-5a67-45c4-be63-2f46b56df1a5","Type":"ContainerStarted","Data":"39d3ccbff1a26b8ae79eeb17cd893cdf52835f2ffeb0b20d0b5955a00b09d66d"} Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.882616 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.885081 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm" event={"ID":"bd8d4294-4075-456c-ab53-d3646b5117b5","Type":"ContainerStarted","Data":"e035bd2e3875867bd6c8147e5db299ad1d214ee0bcdf5ddd319f2651d2418ba3"} Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.901731 4806 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hdxh9 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.901798 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" podUID="d66e251a-5a67-45c4-be63-2f46b56df1a5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.932614 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" event={"ID":"b624b9bd-2dce-41fd-8abf-f21908db8f6c","Type":"ContainerStarted","Data":"f4b75a2d8ad9ee2d5660067843777063e3d836f2817cb7193fa99872cd70dc7a"} Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.934717 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9jh6s" podStartSLOduration=118.934706933 podStartE2EDuration="1m58.934706933s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:57.85130439 +0000 UTC m=+137.115712436" watchObservedRunningTime="2026-01-26 07:55:57.934706933 +0000 UTC m=+137.199114989" Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.939332 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" podStartSLOduration=118.939321803 podStartE2EDuration="1m58.939321803s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:57.938360286 +0000 UTC m=+137.202768342" 
watchObservedRunningTime="2026-01-26 07:55:57.939321803 +0000 UTC m=+137.203729859" Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.969334 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:57 crc kubenswrapper[4806]: E0126 07:55:57.969777 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:58.469762042 +0000 UTC m=+137.734170098 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.984499 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt"] Jan 26 07:55:57 crc kubenswrapper[4806]: I0126 07:55:57.990377 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm" podStartSLOduration=117.990343382 podStartE2EDuration="1m57.990343382s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:57.979789885 +0000 UTC m=+137.244197941" watchObservedRunningTime="2026-01-26 07:55:57.990343382 +0000 UTC m=+137.254751438" Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.001657 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" event={"ID":"001f7476-4d33-474a-80c9-8e99cb19b4e5","Type":"ContainerStarted","Data":"07f44b6287d21a95cafd55a79222041ed745c94d00767bc2297d0cd72226b965"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.058132 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj"] Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.058227 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z"] Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.071876 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-wb6lm" event={"ID":"2a0dd2e2-3942-4daa-a45b-17f7bdc66d00","Type":"ContainerStarted","Data":"43db210b3dcfa4a0a0146a24a82067410f03d53f4227c4daf64224f9bc0ecfd5"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.073069 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:58 crc kubenswrapper[4806]: E0126 07:55:58.073436 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:58.573368955 +0000 UTC m=+137.837777001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.073691 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:58 crc kubenswrapper[4806]: E0126 07:55:58.075628 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:58.575612368 +0000 UTC m=+137.840020424 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.089914 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" event={"ID":"76e1842c-1bb9-492c-9494-55a872376b54","Type":"ContainerStarted","Data":"78f7da37d603ff10c93869fba6641d9f1c18a25d6d55755ac3c53830575a271e"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.098043 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh"] Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.119208 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8"] Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.121913 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-jrg5t"] Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.139498 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nkrgc"] Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.146477 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" 
event={"ID":"5f97371c-7dc2-4170-90e7-f044dcc62f2a","Type":"ContainerStarted","Data":"1745a5dcb0fb858f00d64d30ace8504983267edfc40757ebd0ea6794125875bb"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.168821 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" event={"ID":"aa802450-996f-4548-a763-1e08d1cc564a","Type":"ContainerStarted","Data":"52130a1ab2c90a4fe3f922cbc1687836e6d84affe125734634a9d4f7320d9e42"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.177139 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.183842 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vd8ts"] Jan 26 07:55:58 crc kubenswrapper[4806]: E0126 07:55:58.184694 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:58.684670654 +0000 UTC m=+137.949078890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.185271 4806 generic.go:334] "Generic (PLEG): container finished" podID="3ba883c9-d8a0-42ec-8894-87769eabf95b" containerID="b5e107c8b6674b73f4b6c88cef40ece7ccb8afe6b328604d3a31d663afea08ed" exitCode=0 Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.188174 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" event={"ID":"3ba883c9-d8a0-42ec-8894-87769eabf95b","Type":"ContainerDied","Data":"b5e107c8b6674b73f4b6c88cef40ece7ccb8afe6b328604d3a31d663afea08ed"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.198335 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mls86"] Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.203579 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-htpwn" podStartSLOduration=119.203550247 podStartE2EDuration="1m59.203550247s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:58.188711458 +0000 UTC m=+137.453119514" watchObservedRunningTime="2026-01-26 07:55:58.203550247 +0000 UTC m=+137.467958293" Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.219303 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" 
event={"ID":"58108b0d-1028-4924-b025-1c11d3238dc1","Type":"ContainerStarted","Data":"a37a65473a168f390c17fed020b4cb745c7dddcf1880f92039991e4eb8f50c46"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.233014 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs"] Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.281982 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:58 crc kubenswrapper[4806]: E0126 07:55:58.282364 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:58.78234557 +0000 UTC m=+138.046753626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:58 crc kubenswrapper[4806]: W0126 07:55:58.313053 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7f14ee6_1d4a_4cb9_bf7d_7a3ad4b0a7c1.slice/crio-05f351bd628e4f1d0ed186e903a09110b6d0c74eb8f7ffa6bec2e64cb6f7da24 WatchSource:0}: Error finding container 05f351bd628e4f1d0ed186e903a09110b6d0c74eb8f7ffa6bec2e64cb6f7da24: Status 404 returned error can't find the container with id 05f351bd628e4f1d0ed186e903a09110b6d0c74eb8f7ffa6bec2e64cb6f7da24 Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.316891 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" event={"ID":"aff32d6f-604f-49f1-8547-bea4a259ed45","Type":"ContainerStarted","Data":"130ba06f48d1cd56bc6ebef661d27f8e1d18cecf9ab1c4181fff2d52c71b10ca"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.376984 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk" event={"ID":"ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1","Type":"ContainerStarted","Data":"54d61c6fec03c94c3b972c7f40b85d0b5818b469234589cec4d1a4bacb94119f"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.382808 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:58 crc kubenswrapper[4806]: E0126 07:55:58.383123 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 07:55:58.882979739 +0000 UTC m=+138.147387785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.383248 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:58 crc kubenswrapper[4806]: E0126 07:55:58.383594 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:58.883586406 +0000 UTC m=+138.147994462 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.440804 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" event={"ID":"5d00dcee-5512-4730-8743-e128136b9364","Type":"ContainerStarted","Data":"7ead5a869662d1c984471f72a9cdb1fc6a4b5a09e147cd58f291655c9533279f"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.482178 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" event={"ID":"14769a57-f19b-4d49-868f-d1754827714b","Type":"ContainerStarted","Data":"cf76e26e9d50384799924cebb3e5a1176ca291a809409c0976bf8e806d3df9e4"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.483508 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.483948 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:58 crc kubenswrapper[4806]: E0126 07:55:58.484431 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:58.984414571 +0000 UTC m=+138.248822627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.490011 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" event={"ID":"3bacec42-b4b2-4638-9ea2-8db24615e6db","Type":"ContainerStarted","Data":"854df640b81db93a8c7ab63f82dad39d46919493f3bb3bdf5257a7474db53b56"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.493128 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sxf5m" event={"ID":"fca40e01-6a4e-46e1-970d-60b8436aa04e","Type":"ContainerStarted","Data":"a4111fb6c163b4b270639cdc91110cce895ee7eb0bff7d6d9b7eae0157b213df"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.498364 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4r9h2" event={"ID":"380d2e61-5b11-446b-b3b9-d3e7fa65569e","Type":"ContainerStarted","Data":"14b3527cfabfdfc7b1b808de944e3e49d8bc72f2e596aaf8c48e3da236fea77e"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.498456 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.503096 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" event={"ID":"ac2990c0-273e-4614-821b-a1db45c09c0f","Type":"ContainerStarted","Data":"80bc33e4dc5c61c04e9e7268b249f0500bd290a7fb878909bf933a930aef57f5"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.552747 4806 csr.go:261] certificate signing request csr-59cfl is approved, waiting to be issued Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.558969 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" event={"ID":"2a762473-781e-436f-bc99-584a5301abc3","Type":"ContainerStarted","Data":"01e4d472e4d9e309b18c18bdd77182bddd792cb04851241d6bc6796770eac0a0"} Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.559317 4806 csr.go:257] certificate signing request csr-59cfl is issued Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.563785 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" podStartSLOduration=118.563768899 podStartE2EDuration="1m58.563768899s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:58.516772003 +0000 UTC m=+137.781180059" watchObservedRunningTime="2026-01-26 07:55:58.563768899 +0000 UTC m=+137.828176955" Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.635694 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.638022 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-rgn89" Jan 26 07:55:58 crc kubenswrapper[4806]: E0126 07:55:58.639574 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:59.139557177 +0000 UTC m=+138.403965233 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.739583 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:58 crc kubenswrapper[4806]: E0126 07:55:58.741097 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:59.241072221 +0000 UTC m=+138.505480277 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.841834 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:58 crc kubenswrapper[4806]: E0126 07:55:58.842379 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:59.342353159 +0000 UTC m=+138.606761405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:58 crc kubenswrapper[4806]: I0126 07:55:58.951570 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:58 crc kubenswrapper[4806]: E0126 07:55:58.951987 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:59.451962961 +0000 UTC m=+138.716371027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.061593 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:59 crc kubenswrapper[4806]: E0126 07:55:59.062234 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:59.562217611 +0000 UTC m=+138.826625667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.164746 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:59 crc kubenswrapper[4806]: E0126 07:55:59.165178 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:59.665156685 +0000 UTC m=+138.929564741 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.275732 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:59 crc kubenswrapper[4806]: E0126 07:55:59.276270 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:59.776246739 +0000 UTC m=+139.040654795 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.379955 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:59 crc kubenswrapper[4806]: E0126 07:55:59.380156 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:59.88012501 +0000 UTC m=+139.144533066 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.380487 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:59 crc kubenswrapper[4806]: E0126 07:55:59.380998 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:59.880988444 +0000 UTC m=+139.145396500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.483512 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:59 crc kubenswrapper[4806]: E0126 07:55:59.484783 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:55:59.984743821 +0000 UTC m=+139.249151877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.489105 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:59 crc kubenswrapper[4806]: E0126 07:55:59.489558 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:55:59.989546217 +0000 UTC m=+139.253954273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.561616 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 07:50:58 +0000 UTC, rotation deadline is 2026-10-13 00:26:46.065413778 +0000 UTC Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.562017 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6232h30m46.503402467s for next certificate rotation Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.591084 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:59 crc kubenswrapper[4806]: E0126 07:55:59.591698 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:00.091674218 +0000 UTC m=+139.356082284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.595932 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d46vj" event={"ID":"e658ebdf-74ef-4dab-b48e-53557c516bd3","Type":"ContainerStarted","Data":"cd43699b977294ba6ce956b60f30a06dba9939c8b24e793faec14bbe7c7505dd"} Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.597268 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" event={"ID":"d6260cd7-9202-46cc-b943-b60aaa0e07ff","Type":"ContainerStarted","Data":"6e6a0d31f994aff6ed9e011947a8d2d7672ff7ca68e7ddc5c181bf9f33bd03ce"} Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.610666 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" event={"ID":"58108b0d-1028-4924-b025-1c11d3238dc1","Type":"ContainerStarted","Data":"c254712653dc4867a7b97379dd9dfbf5efa0bdb0b9da94ba605938b05e9f5ea2"} Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.629399 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" event={"ID":"aff32d6f-604f-49f1-8547-bea4a259ed45","Type":"ContainerStarted","Data":"b46220f55d407a4172a7e5143ae4828466642cddd272ecf9fd6615967dec8406"} Jan 26 07:55:59 
crc kubenswrapper[4806]: I0126 07:55:59.662964 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jdcjm" event={"ID":"bd8d4294-4075-456c-ab53-d3646b5117b5","Type":"ContainerStarted","Data":"01721d1068a40829fa020584ba847a1bc078130843942e07e38a66c8e5cadd94"} Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.685090 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zrncs" podStartSLOduration=120.685064101 podStartE2EDuration="2m0.685064101s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:59.684181947 +0000 UTC m=+138.948590003" watchObservedRunningTime="2026-01-26 07:55:59.685064101 +0000 UTC m=+138.949472157" Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.685493 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-46429" podStartSLOduration=119.685487063 podStartE2EDuration="1m59.685487063s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:59.647748699 +0000 UTC m=+138.912156755" watchObservedRunningTime="2026-01-26 07:55:59.685487063 +0000 UTC m=+138.949895119" Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.692922 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:59 crc kubenswrapper[4806]: E0126 07:55:59.693300 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:00.193282023 +0000 UTC m=+139.457690069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.744340 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" event={"ID":"2a762473-781e-436f-bc99-584a5301abc3","Type":"ContainerStarted","Data":"9905a8490a82fcbbd2da61e7cd496b6044f98aa5764e8c1e85ec96d6d6a4222b"} Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.778831 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" event={"ID":"5192ad78-580b-4126-9241-1d52339308b1","Type":"ContainerStarted","Data":"a185d0e4050f5ff4f7858e0845f4179b0c5c0411bc55b2669b529fe3fa8326e2"} Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.784674 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" event={"ID":"f42b0469-833b-4dca-bc17-71e62b73f378","Type":"ContainerStarted","Data":"758d4c7ff2c104602d8d063e73c4c19deafdb296dbbb549254894a2e0ca6450a"} Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.796093 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:55:59 crc kubenswrapper[4806]: E0126 07:55:59.796507 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:00.296485095 +0000 UTC m=+139.560893151 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.851195 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-l5mfg" event={"ID":"2ce69172-bf74-4c4a-8aeb-9b1d86f50254","Type":"ContainerStarted","Data":"3f2e882bcc02d5da38d1904670742843d6bf91be044217925408140d2cf6a638"} Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.887402 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-l5mfg" podStartSLOduration=119.887378669 podStartE2EDuration="1m59.887378669s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:55:59.884828477 +0000 UTC m=+139.149236533" watchObservedRunningTime="2026-01-26 07:55:59.887378669 +0000 UTC m=+139.151786715" Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.898851 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:55:59 crc kubenswrapper[4806]: E0126 07:55:59.899956 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:00.399936223 +0000 UTC m=+139.664344279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.966191 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" event={"ID":"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1","Type":"ContainerStarted","Data":"05f351bd628e4f1d0ed186e903a09110b6d0c74eb8f7ffa6bec2e64cb6f7da24"} Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.980989 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-s7jrc" event={"ID":"8a123de3-5556-4e34-8433-52805089c13c","Type":"ContainerStarted","Data":"279e13024379104f20f082d2614a5edb5e7a729d1b3c643a553c432eca8bd8a1"} Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.982190 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-s7jrc" Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.986223 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-s7jrc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 26 07:55:59 crc kubenswrapper[4806]: I0126 07:55:59.986271 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s7jrc" podUID="8a123de3-5556-4e34-8433-52805089c13c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:55:59.999850 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:00 crc kubenswrapper[4806]: E0126 07:56:00.000161 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:00.50012641 +0000 UTC m=+139.764534476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.006011 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-s7jrc" podStartSLOduration=120.005985645 podStartE2EDuration="2m0.005985645s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.000243063 +0000 UTC m=+139.264651129" watchObservedRunningTime="2026-01-26 07:56:00.005985645 +0000 UTC m=+139.270393701" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.037823 4806 generic.go:334] "Generic (PLEG): container finished" podID="b624b9bd-2dce-41fd-8abf-f21908db8f6c" containerID="a90702fbcdc9e4511631d71f083d4f507f85a1763b769149a0ca75c60f72ae01" exitCode=0 Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.037950 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" event={"ID":"b624b9bd-2dce-41fd-8abf-f21908db8f6c","Type":"ContainerDied","Data":"a90702fbcdc9e4511631d71f083d4f507f85a1763b769149a0ca75c60f72ae01"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.060661 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" event={"ID":"19ceab6e-3284-4ff6-b3a7-541d73c25150","Type":"ContainerStarted","Data":"ccb8e28450385f450801c21b98783276f51408357af3da4b817e4368b7fca582"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.061381 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.075717 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.106446 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" event={"ID":"ac2990c0-273e-4614-821b-a1db45c09c0f","Type":"ContainerStarted","Data":"d555b0d9722a9e8f02562d435b74c02d2a1f95934ea046aabd3ca8cffdc74380"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.107117 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:00 crc kubenswrapper[4806]: E0126 07:56:00.109486 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:00.609461884 +0000 UTC m=+139.873869940 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.121503 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.129867 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:00 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:00 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:00 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.130396 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.153571 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" event={"ID":"84d147bb-634e-40fb-a631-91ff228c0801","Type":"ContainerStarted","Data":"3ea9b6dafb11bcb753041d747bb04fe310c95a0c57dda8f232c5f214cb5d9a91"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.154651 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5g5mr" podStartSLOduration=120.154631418 podStartE2EDuration="2m0.154631418s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.153904808 +0000 UTC m=+139.418312864" watchObservedRunningTime="2026-01-26 07:56:00.154631418 +0000 UTC m=+139.419039464" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.202161 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" event={"ID":"5cfe7537-639f-4bbc-9e05-bca737f295ce","Type":"ContainerStarted","Data":"54d76d416c8d483b2f86cf5e76636be3e10a104688f4fd5d9aad94165a627d81"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.202921 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.212018 4806 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-f9fk8 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.212112 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" 
podUID="5cfe7537-639f-4bbc-9e05-bca737f295ce" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.213491 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:00 crc kubenswrapper[4806]: E0126 07:56:00.214092 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:00.714060945 +0000 UTC m=+139.978469001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.245356 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" podStartSLOduration=120.245309607 podStartE2EDuration="2m0.245309607s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.242014594 +0000 UTC m=+139.506422660" watchObservedRunningTime="2026-01-26 07:56:00.245309607 +0000 UTC m=+139.509717663" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.258047 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" event={"ID":"001f7476-4d33-474a-80c9-8e99cb19b4e5","Type":"ContainerStarted","Data":"686254e6f0b1d3f407e33eb4c144596eb5c7628fd40e0ac5d42db31ceae42f36"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.308567 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-bs67m" podStartSLOduration=120.308497899 podStartE2EDuration="2m0.308497899s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.301710708 +0000 UTC m=+139.566118764" watchObservedRunningTime="2026-01-26 07:56:00.308497899 +0000 UTC m=+139.572905955" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.309549 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk" event={"ID":"ff8cfaa0-24a3-4aeb-8b43-d0c5c6c401f1","Type":"ContainerStarted","Data":"796739a97026e4def19a7fad7582ad1b1b4d449e6943869b8270800f9e3f5b12"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.321713 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:00 crc kubenswrapper[4806]: E0126 07:56:00.323262 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:00.823246775 +0000 UTC m=+140.087654831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.340251 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nkrgc" event={"ID":"41318e1c-f829-49d4-9ad8-5dc639973784","Type":"ContainerStarted","Data":"16d5010e34d3f633595a147efe2926bfd0fa3429006576673ad0d6e78a471174"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.349873 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-njnsk" podStartSLOduration=121.349836395 podStartE2EDuration="2m1.349836395s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.349200117 +0000 UTC m=+139.613608163" watchObservedRunningTime="2026-01-26 07:56:00.349836395 +0000 UTC m=+139.614244451" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.399401 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-nkrgc" podStartSLOduration=9.399380623 podStartE2EDuration="9.399380623s" podCreationTimestamp="2026-01-26 07:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.396104871 +0000 UTC m=+139.660512927" watchObservedRunningTime="2026-01-26 07:56:00.399380623 +0000 UTC m=+139.663788679" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.423973 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:00 crc kubenswrapper[4806]: E0126 07:56:00.424185 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:00.924146102 +0000 UTC m=+140.188554168 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.425116 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.425446 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wkkp2" event={"ID":"e8b535af-199e-479d-a619-194b396b8eb5","Type":"ContainerStarted","Data":"f1d2dde248492d377f8ebaee004509478e94bacb19639d8da7da1eaa855cba34"} Jan 26 07:56:00 crc kubenswrapper[4806]: E0126 07:56:00.427255 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:00.927213468 +0000 UTC m=+140.191621524 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.465814 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4r9h2" event={"ID":"380d2e61-5b11-446b-b3b9-d3e7fa65569e","Type":"ContainerStarted","Data":"30de6d5cdaeb2c93208c1cc4edcffd1d7e558e37567f8f10509c6d0dbd60e6e3"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.478172 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wkkp2" podStartSLOduration=8.478156326 podStartE2EDuration="8.478156326s" podCreationTimestamp="2026-01-26 07:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.477054554 +0000 UTC m=+139.741462610" watchObservedRunningTime="2026-01-26 07:56:00.478156326 +0000 UTC m=+139.742564382" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.495039 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" event={"ID":"4cdf983d-19e2-445c-9ae4-237ec10dd839","Type":"ContainerStarted","Data":"7069d4a1c0d1fb68e14e00d5a87c5566af4def3e8cde72c3966cd704437787a3"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.523443 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" 
event={"ID":"5d00dcee-5512-4730-8743-e128136b9364","Type":"ContainerStarted","Data":"3e55ce992d5b882c4b881fc08e9572fa627622b229b8d7ea4652004101c671c4"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.531353 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:00 crc kubenswrapper[4806]: E0126 07:56:00.533101 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.033069025 +0000 UTC m=+140.297477081 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.534397 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" event={"ID":"6dac3476-1363-4ddc-8fa5-9f0e110e5c38","Type":"ContainerStarted","Data":"d984dc2a09f6067f45b4fa50eeccadd4e638b342b1899df0e2b34880e37fa676"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.541582 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mls86" event={"ID":"e04696fa-1756-4ad4-9908-a09f7339c584","Type":"ContainerStarted","Data":"8a8b53ae97817b7ca87d131fe96573da90d94b208489d2ca0248887f85fccc35"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.555083 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" podStartSLOduration=120.555061535 podStartE2EDuration="2m0.555061535s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.550369443 +0000 UTC m=+139.814777499" watchObservedRunningTime="2026-01-26 07:56:00.555061535 +0000 UTC m=+139.819469591" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.574103 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" event={"ID":"cf7f962d-5924-4e0e-bd23-cd46ba65f5a9","Type":"ContainerStarted","Data":"87b7774513df79f125578a82c6814e63077f535a445cd1765db652fbea19eeb7"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.613393 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-qnzvz" podStartSLOduration=120.61337585 podStartE2EDuration="2m0.61337585s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.611913829 +0000 UTC m=+139.876321885" watchObservedRunningTime="2026-01-26 07:56:00.61337585 +0000 UTC 
m=+139.877783906" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.629612 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" event={"ID":"aa802450-996f-4548-a763-1e08d1cc564a","Type":"ContainerStarted","Data":"15ce978ed167e40311c01b0449a06063c2487d38888749f35386ca829b5deda5"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.629853 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.641182 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:00 crc kubenswrapper[4806]: E0126 07:56:00.642293 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.142281286 +0000 UTC m=+140.406689342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.666329 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" podStartSLOduration=120.666314614 podStartE2EDuration="2m0.666314614s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.665552042 +0000 UTC m=+139.929960098" watchObservedRunningTime="2026-01-26 07:56:00.666314614 +0000 UTC m=+139.930722670" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.667179 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jrg5t" event={"ID":"61b3db9d-f649-4424-8cc5-b7b3f3f1161b","Type":"ContainerStarted","Data":"0612774dcd676a862396ce32cbe4421daf0637106c24522cbe04d0576ef99494"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.718065 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" podStartSLOduration=120.718049133 podStartE2EDuration="2m0.718049133s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.717564479 +0000 UTC m=+139.981972525" watchObservedRunningTime="2026-01-26 07:56:00.718049133 +0000 UTC m=+139.982457189" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.719404 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" 
event={"ID":"76e1842c-1bb9-492c-9494-55a872376b54","Type":"ContainerStarted","Data":"f43781d8d605ccecbfb9f9930a891a75da2c4899f4d7002afb87f5ae72ff6e37"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.721110 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" event={"ID":"3bacec42-b4b2-4638-9ea2-8db24615e6db","Type":"ContainerStarted","Data":"87449cba6f1bac4e731a233e196aabf024dc6cf88c6d259b2a5d27a6455c4216"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.742321 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:00 crc kubenswrapper[4806]: E0126 07:56:00.743223 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.243202513 +0000 UTC m=+140.507610579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.751456 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sxf5m" event={"ID":"fca40e01-6a4e-46e1-970d-60b8436aa04e","Type":"ContainerStarted","Data":"1f36174818b1067c35b8b1d01e1d4d4cd4dc3318f2745d5bcc3a8ae2d65868ce"} Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.758494 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hq8jp" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.762812 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" podStartSLOduration=120.762789645 podStartE2EDuration="2m0.762789645s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.758893075 +0000 UTC m=+140.023301131" watchObservedRunningTime="2026-01-26 07:56:00.762789645 +0000 UTC m=+140.027197701" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.766562 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.801360 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6sjl8" podStartSLOduration=120.801327413 podStartE2EDuration="2m0.801327413s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-26 07:56:00.799080319 +0000 UTC m=+140.063488365" watchObservedRunningTime="2026-01-26 07:56:00.801327413 +0000 UTC m=+140.065735469" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.845828 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:00 crc kubenswrapper[4806]: E0126 07:56:00.850671 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.350655004 +0000 UTC m=+140.615063060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.893317 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-zd7nh" podStartSLOduration=120.893295857 podStartE2EDuration="2m0.893295857s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:00.838197273 +0000 UTC m=+140.102605329" watchObservedRunningTime="2026-01-26 07:56:00.893295857 +0000 UTC m=+140.157703913" Jan 26 07:56:00 crc kubenswrapper[4806]: I0126 07:56:00.947351 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:00 crc kubenswrapper[4806]: E0126 07:56:00.947700 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.447685402 +0000 UTC m=+140.712093458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.048982 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.049320 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.549305828 +0000 UTC m=+140.813713884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.125073 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:01 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:01 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:01 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.125150 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.149665 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.149902 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.649878306 +0000 UTC m=+140.914286362 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.149991 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.150439 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.650430681 +0000 UTC m=+140.914838737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.251141 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.251381 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.751353468 +0000 UTC m=+141.015761524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.251479 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.251880 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.751869063 +0000 UTC m=+141.016277329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.352342 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.352560 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.852513532 +0000 UTC m=+141.116921588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.352840 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.353237 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.853222992 +0000 UTC m=+141.117631048 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.454421 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.454821 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.954803598 +0000 UTC m=+141.219211654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.455112 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.455513 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:01.955499878 +0000 UTC m=+141.219907934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.556561 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.556838 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:02.056807635 +0000 UTC m=+141.321215691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.557058 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.557450 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:02.057434583 +0000 UTC m=+141.321842639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.658533 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.658855 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:02.158840734 +0000 UTC m=+141.423248790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.687232 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-79ddh"] Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.688261 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.693896 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.715063 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-79ddh"] Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.762579 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slhdg\" (UniqueName: \"kubernetes.io/projected/df97f49a-b950-45f2-8c66-52f2c6c33163-kube-api-access-slhdg\") pod \"community-operators-79ddh\" (UID: \"df97f49a-b950-45f2-8c66-52f2c6c33163\") " pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.763142 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.763166 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df97f49a-b950-45f2-8c66-52f2c6c33163-utilities\") pod \"community-operators-79ddh\" (UID: \"df97f49a-b950-45f2-8c66-52f2c6c33163\") " pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.763189 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df97f49a-b950-45f2-8c66-52f2c6c33163-catalog-content\") pod \"community-operators-79ddh\" (UID: \"df97f49a-b950-45f2-8c66-52f2c6c33163\") " pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.763476 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:02.263464176 +0000 UTC m=+141.527872232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.793171 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" event={"ID":"5cfe7537-639f-4bbc-9e05-bca737f295ce","Type":"ContainerStarted","Data":"d0e321fd703de93f2fc4d8d13e1cb2749f347cba21bc8f4bfa3503efeb03704a"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.810486 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-sf2rk" event={"ID":"6dac3476-1363-4ddc-8fa5-9f0e110e5c38","Type":"ContainerStarted","Data":"cc5f8537edd8ff7a905a44994f05225a4c3492a9156345d32398ef638981649e"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.813048 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" event={"ID":"b624b9bd-2dce-41fd-8abf-f21908db8f6c","Type":"ContainerStarted","Data":"6bf1d7ebb1993503c4142e73a5faeca7d1bafb8f7de21f97d5b469ba31e4d51f"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.813708 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.814950 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mls86" event={"ID":"e04696fa-1756-4ad4-9908-a09f7339c584","Type":"ContainerStarted","Data":"ffbae90f20bd119b16f344fca6d7011aa014f3f5b1322f0da4c1f8b8e7123c32"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.814989 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mls86" event={"ID":"e04696fa-1756-4ad4-9908-a09f7339c584","Type":"ContainerStarted","Data":"a6da45a46b259c686b41a9a87de4d660711b16bccfa05cd29078b3e3054a8e9f"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.815889 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nkrgc" event={"ID":"41318e1c-f829-49d4-9ad8-5dc639973784","Type":"ContainerStarted","Data":"742352f18b830d3e16b51d16b4d225b1d88cefccabad7827b032a50eba8a810b"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.817252 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" event={"ID":"d6260cd7-9202-46cc-b943-b60aaa0e07ff","Type":"ContainerStarted","Data":"f97f2ba33f19f2b1ecd24aa27d6bf759b01491b83834d70290e9fb7f99b8c753"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.835879 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" event={"ID":"2a762473-781e-436f-bc99-584a5301abc3","Type":"ContainerStarted","Data":"99187e8182db821dbb22521ebc93342ef91eed25079cfd8f6922685c40a89677"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.837714 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" podStartSLOduration=122.8376938 podStartE2EDuration="2m2.8376938s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:01.836554477 +0000 UTC m=+141.100962543" watchObservedRunningTime="2026-01-26 07:56:01.8376938 +0000 UTC m=+141.102101856" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.853749 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jrg5t" event={"ID":"61b3db9d-f649-4424-8cc5-b7b3f3f1161b","Type":"ContainerStarted","Data":"7e2ad32a43671f873e052b40f143347da8bcbd2b93510cdbf4a38c210d0795b7"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.853808 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-jrg5t" event={"ID":"61b3db9d-f649-4424-8cc5-b7b3f3f1161b","Type":"ContainerStarted","Data":"b3ee1c712c2e45f0670a978e44dcf3879a094704acbc8851e18674d75dcf5ceb"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.854500 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-jrg5t" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.862272 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" event={"ID":"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1","Type":"ContainerStarted","Data":"3b8ffc16eec0a9205c74046e6957dc892472e5de7293eb6ef937088aaf25fa9d"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.863601 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.864200 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df97f49a-b950-45f2-8c66-52f2c6c33163-catalog-content\") pod \"community-operators-79ddh\" (UID: \"df97f49a-b950-45f2-8c66-52f2c6c33163\") " pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.864327 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slhdg\" (UniqueName: \"kubernetes.io/projected/df97f49a-b950-45f2-8c66-52f2c6c33163-kube-api-access-slhdg\") pod \"community-operators-79ddh\" (UID: \"df97f49a-b950-45f2-8c66-52f2c6c33163\") " pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.864452 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df97f49a-b950-45f2-8c66-52f2c6c33163-utilities\") pod \"community-operators-79ddh\" (UID: \"df97f49a-b950-45f2-8c66-52f2c6c33163\") " pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.865299 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 07:56:02.365278018 +0000 UTC m=+141.629686074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.867224 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df97f49a-b950-45f2-8c66-52f2c6c33163-catalog-content\") pod \"community-operators-79ddh\" (UID: \"df97f49a-b950-45f2-8c66-52f2c6c33163\") " pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.868335 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df97f49a-b950-45f2-8c66-52f2c6c33163-utilities\") pod \"community-operators-79ddh\" (UID: \"df97f49a-b950-45f2-8c66-52f2c6c33163\") " pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.869940 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqmhh" podStartSLOduration=121.869929529 podStartE2EDuration="2m1.869929529s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:01.863196619 +0000 UTC m=+141.127604675" watchObservedRunningTime="2026-01-26 07:56:01.869929529 +0000 UTC m=+141.134337585" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.888828 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" event={"ID":"f42b0469-833b-4dca-bc17-71e62b73f378","Type":"ContainerStarted","Data":"1879b6805705a13a901c9e2b0e81c0db46233fa98a57d70f264050703e6d5d06"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.916131 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slhdg\" (UniqueName: \"kubernetes.io/projected/df97f49a-b950-45f2-8c66-52f2c6c33163-kube-api-access-slhdg\") pod \"community-operators-79ddh\" (UID: \"df97f49a-b950-45f2-8c66-52f2c6c33163\") " pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.917565 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d46vj" event={"ID":"e658ebdf-74ef-4dab-b48e-53557c516bd3","Type":"ContainerStarted","Data":"eba0e361ed28e909fa6737f5456e099f5004e42b4ed2c9e5f93610d436e648bf"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.917725 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-vhrp6" podStartSLOduration=121.917707147 podStartE2EDuration="2m1.917707147s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:01.915391192 +0000 UTC 
m=+141.179799248" watchObservedRunningTime="2026-01-26 07:56:01.917707147 +0000 UTC m=+141.182115203" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.929846 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" event={"ID":"84d147bb-634e-40fb-a631-91ff228c0801","Type":"ContainerStarted","Data":"5f08d65064dbbb7d8267d85acd04d18b836184e8e70e780e307341d0f8bcdef4"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.931150 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.946739 4806 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vd8ts container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.946826 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" podUID="84d147bb-634e-40fb-a631-91ff228c0801" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.949380 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4r9h2" event={"ID":"380d2e61-5b11-446b-b3b9-d3e7fa65569e","Type":"ContainerStarted","Data":"2ba18f94cd81123de510db1119cdd93306348f886d2f14fb27920f4b61108551"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.964737 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" event={"ID":"5192ad78-580b-4126-9241-1d52339308b1","Type":"ContainerStarted","Data":"efb7ba0ffc06649d6e18f153466ea1fdc67ed32fd7b9594ebf36cc61f0f1c220"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.964784 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" event={"ID":"5192ad78-580b-4126-9241-1d52339308b1","Type":"ContainerStarted","Data":"ff443b19bf36847aa6a37ad41ed1bb56f7800ca4eb07a1c697f1cd2688455ef0"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.965963 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:01 crc kubenswrapper[4806]: E0126 07:56:01.969028 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:02.469003534 +0000 UTC m=+141.733411590 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.976460 4806 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.985184 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-7c9hs" event={"ID":"4cdf983d-19e2-445c-9ae4-237ec10dd839","Type":"ContainerStarted","Data":"a8dd04c53d07bb001aa7b09915a9f26019f25fac1a1ba24106afa69189408071"} Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.992327 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bq7zd"] Jan 26 07:56:01 crc kubenswrapper[4806]: I0126 07:56:01.999145 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sxf5m" event={"ID":"fca40e01-6a4e-46e1-970d-60b8436aa04e","Type":"ContainerStarted","Data":"91252d8380105b0df3a343e2696f01a30e4f879a581c418c449b674f856f4bf0"} Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.001653 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.003226 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.004629 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.035969 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mkvhj" podStartSLOduration=122.035940012 podStartE2EDuration="2m2.035940012s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:02.035648134 +0000 UTC m=+141.300056190" watchObservedRunningTime="2026-01-26 07:56:02.035940012 +0000 UTC m=+141.300348058" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.037223 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" event={"ID":"ac2990c0-273e-4614-821b-a1db45c09c0f","Type":"ContainerStarted","Data":"d20943f001db4897e91812b6415b6528ab8c0cef3e49d51033a8c0d11993ce01"} Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.039235 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.069132 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.069746 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-utilities\") pod \"certified-operators-bq7zd\" (UID: \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\") " pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.069813 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-catalog-content\") pod \"certified-operators-bq7zd\" (UID: \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\") " pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.069839 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpx27\" (UniqueName: \"kubernetes.io/projected/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-kube-api-access-qpx27\") pod \"certified-operators-bq7zd\" (UID: \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\") " pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:56:02 crc kubenswrapper[4806]: E0126 07:56:02.070210 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 07:56:02.570189569 +0000 UTC m=+141.834597625 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.078779 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bq7zd"] Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.094871 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" event={"ID":"3ba883c9-d8a0-42ec-8894-87769eabf95b","Type":"ContainerStarted","Data":"461ef3fcc1dbcc9a00f83b96779a76892db7f933353cceb7570668184bee0bd3"} Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.094917 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" event={"ID":"3ba883c9-d8a0-42ec-8894-87769eabf95b","Type":"ContainerStarted","Data":"b4272dcb195f66a8cecc826f1f19596a34e002b91e982a77521f4646edf9e005"} Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.141166 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-s7jrc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.141503 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s7jrc" podUID="8a123de3-5556-4e34-8433-52805089c13c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.158148 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:02 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:02 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:02 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.158211 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.162586 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-45bpz"] Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.163828 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.179876 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-catalog-content\") pod \"certified-operators-bq7zd\" (UID: \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\") " pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.179913 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpx27\" (UniqueName: \"kubernetes.io/projected/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-kube-api-access-qpx27\") pod \"certified-operators-bq7zd\" (UID: \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\") " pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.180541 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-utilities\") pod \"certified-operators-bq7zd\" (UID: \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\") " pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.180579 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.182816 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-catalog-content\") pod \"certified-operators-bq7zd\" (UID: \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\") " pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.186478 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-utilities\") pod \"certified-operators-bq7zd\" (UID: \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\") " pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:56:02 crc kubenswrapper[4806]: E0126 07:56:02.188799 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 07:56:02.688781074 +0000 UTC m=+141.953189130 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-tncnb" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.193589 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-45bpz"] Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.193663 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" podStartSLOduration=123.193642531 podStartE2EDuration="2m3.193642531s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:02.158057507 +0000 UTC m=+141.422465563" watchObservedRunningTime="2026-01-26 07:56:02.193642531 +0000 UTC m=+141.458050587" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.219210 4806 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T07:56:01.976488305Z","Handler":null,"Name":""} Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.225359 4806 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.225393 4806 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.245832 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-jrg5t" podStartSLOduration=10.245815553 podStartE2EDuration="10.245815553s" podCreationTimestamp="2026-01-26 07:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:02.244690251 +0000 UTC m=+141.509098307" watchObservedRunningTime="2026-01-26 07:56:02.245815553 +0000 UTC m=+141.510223609" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.257350 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpx27\" (UniqueName: \"kubernetes.io/projected/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-kube-api-access-qpx27\") pod \"certified-operators-bq7zd\" (UID: \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\") " pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.257538 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-f9fk8" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.281148 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.281484 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f544176-9dd8-4416-99f3-53299cd7ffb0-utilities\") pod \"community-operators-45bpz\" (UID: \"4f544176-9dd8-4416-99f3-53299cd7ffb0\") " pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.281515 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z57jj\" (UniqueName: \"kubernetes.io/projected/4f544176-9dd8-4416-99f3-53299cd7ffb0-kube-api-access-z57jj\") pod \"community-operators-45bpz\" (UID: \"4f544176-9dd8-4416-99f3-53299cd7ffb0\") " pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.281551 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f544176-9dd8-4416-99f3-53299cd7ffb0-catalog-content\") pod \"community-operators-45bpz\" (UID: \"4f544176-9dd8-4416-99f3-53299cd7ffb0\") " pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.321165 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rqpjn"] Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.328881 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.368994 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" podStartSLOduration=123.368976868 podStartE2EDuration="2m3.368976868s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:02.362080503 +0000 UTC m=+141.626488559" watchObservedRunningTime="2026-01-26 07:56:02.368976868 +0000 UTC m=+141.633384924" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.385318 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z57jj\" (UniqueName: \"kubernetes.io/projected/4f544176-9dd8-4416-99f3-53299cd7ffb0-kube-api-access-z57jj\") pod \"community-operators-45bpz\" (UID: \"4f544176-9dd8-4416-99f3-53299cd7ffb0\") " pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.385365 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f544176-9dd8-4416-99f3-53299cd7ffb0-catalog-content\") pod \"community-operators-45bpz\" (UID: \"4f544176-9dd8-4416-99f3-53299cd7ffb0\") " pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.385448 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/339dd820-f50a-4135-9da6-5768324b8d55-catalog-content\") pod \"certified-operators-rqpjn\" (UID: \"339dd820-f50a-4135-9da6-5768324b8d55\") " pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.385541 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/339dd820-f50a-4135-9da6-5768324b8d55-utilities\") pod \"certified-operators-rqpjn\" (UID: \"339dd820-f50a-4135-9da6-5768324b8d55\") " pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.385566 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6kg2\" (UniqueName: \"kubernetes.io/projected/339dd820-f50a-4135-9da6-5768324b8d55-kube-api-access-m6kg2\") pod \"certified-operators-rqpjn\" (UID: \"339dd820-f50a-4135-9da6-5768324b8d55\") " pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.385589 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f544176-9dd8-4416-99f3-53299cd7ffb0-utilities\") pod \"community-operators-45bpz\" (UID: \"4f544176-9dd8-4416-99f3-53299cd7ffb0\") " pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.386352 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.387710 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f544176-9dd8-4416-99f3-53299cd7ffb0-catalog-content\") pod \"community-operators-45bpz\" (UID: \"4f544176-9dd8-4416-99f3-53299cd7ffb0\") " pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.388114 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f544176-9dd8-4416-99f3-53299cd7ffb0-utilities\") pod \"community-operators-45bpz\" (UID: \"4f544176-9dd8-4416-99f3-53299cd7ffb0\") " pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.393468 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.401789 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqpjn"] Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.433246 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-sxf5m" podStartSLOduration=122.4332162 podStartE2EDuration="2m2.4332162s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:02.431398789 +0000 UTC m=+141.695806845" watchObservedRunningTime="2026-01-26 07:56:02.4332162 +0000 UTC m=+141.697624256" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.439558 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z57jj\" (UniqueName: \"kubernetes.io/projected/4f544176-9dd8-4416-99f3-53299cd7ffb0-kube-api-access-z57jj\") pod \"community-operators-45bpz\" (UID: \"4f544176-9dd8-4416-99f3-53299cd7ffb0\") " pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.487201 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/339dd820-f50a-4135-9da6-5768324b8d55-utilities\") pod \"certified-operators-rqpjn\" (UID: \"339dd820-f50a-4135-9da6-5768324b8d55\") " pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.487245 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6kg2\" (UniqueName: \"kubernetes.io/projected/339dd820-f50a-4135-9da6-5768324b8d55-kube-api-access-m6kg2\") pod \"certified-operators-rqpjn\" (UID: \"339dd820-f50a-4135-9da6-5768324b8d55\") " pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.487301 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.487334 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/339dd820-f50a-4135-9da6-5768324b8d55-catalog-content\") pod \"certified-operators-rqpjn\" (UID: \"339dd820-f50a-4135-9da6-5768324b8d55\") " pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.487915 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/339dd820-f50a-4135-9da6-5768324b8d55-utilities\") pod \"certified-operators-rqpjn\" (UID: \"339dd820-f50a-4135-9da6-5768324b8d55\") " pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.501981 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/339dd820-f50a-4135-9da6-5768324b8d55-catalog-content\") pod \"certified-operators-rqpjn\" (UID: 
\"339dd820-f50a-4135-9da6-5768324b8d55\") " pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.535352 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.536773 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-4r9h2" podStartSLOduration=122.536754081 podStartE2EDuration="2m2.536754081s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:02.534868588 +0000 UTC m=+141.799276644" watchObservedRunningTime="2026-01-26 07:56:02.536754081 +0000 UTC m=+141.801162137" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.545060 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.545102 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.546057 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6kg2\" (UniqueName: \"kubernetes.io/projected/339dd820-f50a-4135-9da6-5768324b8d55-kube-api-access-m6kg2\") pod \"certified-operators-rqpjn\" (UID: \"339dd820-f50a-4135-9da6-5768324b8d55\") " pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.684192 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-d46vj" podStartSLOduration=122.68417238 podStartE2EDuration="2m2.68417238s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:02.577366227 +0000 UTC m=+141.841774283" watchObservedRunningTime="2026-01-26 07:56:02.68417238 +0000 UTC m=+141.948580436" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.695847 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.696724 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" podStartSLOduration=122.696702273 podStartE2EDuration="2m2.696702273s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:02.690581181 +0000 UTC m=+141.954989237" watchObservedRunningTime="2026-01-26 07:56:02.696702273 +0000 UTC m=+141.961110329" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.721594 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" podStartSLOduration=122.721579055 podStartE2EDuration="2m2.721579055s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:02.720170955 +0000 UTC m=+141.984579011" watchObservedRunningTime="2026-01-26 07:56:02.721579055 +0000 UTC m=+141.985987111" Jan 26 07:56:02 crc kubenswrapper[4806]: I0126 07:56:02.851683 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mkfwt" podStartSLOduration=122.851657315 podStartE2EDuration="2m2.851657315s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:02.785873689 +0000 UTC m=+142.050281745" watchObservedRunningTime="2026-01-26 07:56:02.851657315 +0000 UTC m=+142.116065371" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.098238 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.160178 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-79ddh"] Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.161413 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:03 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:03 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:03 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.162232 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.200408 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mls86" event={"ID":"e04696fa-1756-4ad4-9908-a09f7339c584","Type":"ContainerStarted","Data":"2668a6445ad063b61695b6947b032e1642682b86a1f1627e7273cd20343b49c6"} Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.206863 4806 patch_prober.go:28] 
interesting pod/marketplace-operator-79b997595-vd8ts container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.211485 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" podUID="84d147bb-634e-40fb-a631-91ff228c0801" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.208789 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-s7jrc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.211562 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s7jrc" podUID="8a123de3-5556-4e34-8433-52805089c13c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.249153 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-tncnb\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.497237 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bq7zd"] Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.503876 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.658282 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rqpjn"] Jan 26 07:56:03 crc kubenswrapper[4806]: W0126 07:56:03.678971 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod339dd820_f50a_4135_9da6_5768324b8d55.slice/crio-99fa590e83629c40ca425dee37063742d504d77c790580d693abc93bba343b20 WatchSource:0}: Error finding container 99fa590e83629c40ca425dee37063742d504d77c790580d693abc93bba343b20: Status 404 returned error can't find the container with id 99fa590e83629c40ca425dee37063742d504d77c790580d693abc93bba343b20 Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.702938 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ts6cz"] Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.704066 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.714866 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.720728 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.721610 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.732738 4806 patch_prober.go:28] interesting pod/console-f9d7485db-qd6mh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.732794 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qd6mh" podUID="ee89739e-edc1-41b5-bf4a-da80ba0a59aa" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.742986 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-sbd2c" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.822861 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts6cz"] Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.846221 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-45bpz"] Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.873187 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a078c937-6bed-4604-a0a1-25c9c7d2503d-utilities\") pod \"redhat-marketplace-ts6cz\" (UID: \"a078c937-6bed-4604-a0a1-25c9c7d2503d\") " pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.873270 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a078c937-6bed-4604-a0a1-25c9c7d2503d-catalog-content\") pod \"redhat-marketplace-ts6cz\" (UID: \"a078c937-6bed-4604-a0a1-25c9c7d2503d\") " pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.873411 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wwxr\" (UniqueName: \"kubernetes.io/projected/a078c937-6bed-4604-a0a1-25c9c7d2503d-kube-api-access-8wwxr\") pod \"redhat-marketplace-ts6cz\" (UID: \"a078c937-6bed-4604-a0a1-25c9c7d2503d\") " pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.896269 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.896301 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:56:03 crc kubenswrapper[4806]: 
I0126 07:56:03.923777 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.974948 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a078c937-6bed-4604-a0a1-25c9c7d2503d-utilities\") pod \"redhat-marketplace-ts6cz\" (UID: \"a078c937-6bed-4604-a0a1-25c9c7d2503d\") " pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.975011 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a078c937-6bed-4604-a0a1-25c9c7d2503d-catalog-content\") pod \"redhat-marketplace-ts6cz\" (UID: \"a078c937-6bed-4604-a0a1-25c9c7d2503d\") " pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.975071 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wwxr\" (UniqueName: \"kubernetes.io/projected/a078c937-6bed-4604-a0a1-25c9c7d2503d-kube-api-access-8wwxr\") pod \"redhat-marketplace-ts6cz\" (UID: \"a078c937-6bed-4604-a0a1-25c9c7d2503d\") " pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.975770 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a078c937-6bed-4604-a0a1-25c9c7d2503d-utilities\") pod \"redhat-marketplace-ts6cz\" (UID: \"a078c937-6bed-4604-a0a1-25c9c7d2503d\") " pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:03 crc kubenswrapper[4806]: I0126 07:56:03.976019 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a078c937-6bed-4604-a0a1-25c9c7d2503d-catalog-content\") pod \"redhat-marketplace-ts6cz\" (UID: \"a078c937-6bed-4604-a0a1-25c9c7d2503d\") " pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.000305 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wwxr\" (UniqueName: \"kubernetes.io/projected/a078c937-6bed-4604-a0a1-25c9c7d2503d-kube-api-access-8wwxr\") pod \"redhat-marketplace-ts6cz\" (UID: \"a078c937-6bed-4604-a0a1-25c9c7d2503d\") " pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.090492 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.102051 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-248dw"] Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.103082 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.121671 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-248dw"] Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.131745 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:04 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:04 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:04 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.131810 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.218631 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.242483 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mls86" event={"ID":"e04696fa-1756-4ad4-9908-a09f7339c584","Type":"ContainerStarted","Data":"06afb8b40e6a963c9b180bc4f1c59b05009243e0ddcff203d5256e7219c8fe65"} Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.252096 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" event={"ID":"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1","Type":"ContainerDied","Data":"3b8ffc16eec0a9205c74046e6957dc892472e5de7293eb6ef937088aaf25fa9d"} Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.252163 4806 generic.go:334] "Generic (PLEG): container finished" podID="d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1" containerID="3b8ffc16eec0a9205c74046e6957dc892472e5de7293eb6ef937088aaf25fa9d" exitCode=0 Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.253765 4806 generic.go:334] "Generic (PLEG): container finished" podID="df97f49a-b950-45f2-8c66-52f2c6c33163" containerID="7f8dee80216d1651426ad8599d8f663f4a7b01e897924ccba87dc62ee3139bde" exitCode=0 Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.254556 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79ddh" event={"ID":"df97f49a-b950-45f2-8c66-52f2c6c33163","Type":"ContainerDied","Data":"7f8dee80216d1651426ad8599d8f663f4a7b01e897924ccba87dc62ee3139bde"} Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.254586 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79ddh" event={"ID":"df97f49a-b950-45f2-8c66-52f2c6c33163","Type":"ContainerStarted","Data":"d6f7e14f6f659f0c374f0128c4d28383bdeed890dbcf5b42f10df91c952fa6b1"} Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.257866 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.264422 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.264486 4806 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.265429 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45bpz" event={"ID":"4f544176-9dd8-4416-99f3-53299cd7ffb0","Type":"ContainerStarted","Data":"dca281a711bb84e93192ca3c5a14f6ceb8da3cf1c87da2ec2ed42bf460164eee"} Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.277869 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq7zd" event={"ID":"0531f954-d1d9-42f0-bd29-f8ff5b0871b4","Type":"ContainerStarted","Data":"ceae611e088a277c36bfe42d897da1092b828ee94aeb89c5dfef7356c336c5c1"} Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.278960 4806 patch_prober.go:28] interesting pod/apiserver-76f77b778f-4gdvx container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 07:56:04 crc kubenswrapper[4806]: [+]log ok Jan 26 07:56:04 crc kubenswrapper[4806]: [+]etcd ok Jan 26 07:56:04 crc kubenswrapper[4806]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 07:56:04 crc kubenswrapper[4806]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 07:56:04 crc kubenswrapper[4806]: [+]poststarthook/max-in-flight-filter ok Jan 26 07:56:04 crc kubenswrapper[4806]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 07:56:04 crc kubenswrapper[4806]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 07:56:04 crc kubenswrapper[4806]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 07:56:04 crc kubenswrapper[4806]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 26 07:56:04 crc kubenswrapper[4806]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 07:56:04 crc kubenswrapper[4806]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 07:56:04 crc kubenswrapper[4806]: [+]poststarthook/openshift.io-startinformers ok Jan 26 07:56:04 crc kubenswrapper[4806]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 07:56:04 crc kubenswrapper[4806]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 07:56:04 crc kubenswrapper[4806]: livez check failed Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.279045 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" podUID="3ba883c9-d8a0-42ec-8894-87769eabf95b" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.282260 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422dbc29-ef6c-40a2-8928-ef97946880a0-utilities\") pod \"redhat-marketplace-248dw\" (UID: \"422dbc29-ef6c-40a2-8928-ef97946880a0\") " pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.282334 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg5pf\" (UniqueName: \"kubernetes.io/projected/422dbc29-ef6c-40a2-8928-ef97946880a0-kube-api-access-fg5pf\") pod \"redhat-marketplace-248dw\" (UID: \"422dbc29-ef6c-40a2-8928-ef97946880a0\") " pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:04 crc 
kubenswrapper[4806]: I0126 07:56:04.282360 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422dbc29-ef6c-40a2-8928-ef97946880a0-catalog-content\") pod \"redhat-marketplace-248dw\" (UID: \"422dbc29-ef6c-40a2-8928-ef97946880a0\") " pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.296254 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqpjn" event={"ID":"339dd820-f50a-4135-9da6-5768324b8d55","Type":"ContainerStarted","Data":"fbc84a449235680ffbcf290fcf3911cf270d2ce4e3f604e218451c30518dcbcf"} Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.296292 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqpjn" event={"ID":"339dd820-f50a-4135-9da6-5768324b8d55","Type":"ContainerStarted","Data":"99fa590e83629c40ca425dee37063742d504d77c790580d693abc93bba343b20"} Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.314898 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.317271 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-gmnqv" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.337272 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-mls86" podStartSLOduration=12.337252784 podStartE2EDuration="12.337252784s" podCreationTimestamp="2026-01-26 07:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:04.335747632 +0000 UTC m=+143.600155698" watchObservedRunningTime="2026-01-26 07:56:04.337252784 +0000 UTC m=+143.601660870" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.344838 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tncnb"] Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.383034 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422dbc29-ef6c-40a2-8928-ef97946880a0-utilities\") pod \"redhat-marketplace-248dw\" (UID: \"422dbc29-ef6c-40a2-8928-ef97946880a0\") " pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.383337 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg5pf\" (UniqueName: \"kubernetes.io/projected/422dbc29-ef6c-40a2-8928-ef97946880a0-kube-api-access-fg5pf\") pod \"redhat-marketplace-248dw\" (UID: \"422dbc29-ef6c-40a2-8928-ef97946880a0\") " pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.383358 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422dbc29-ef6c-40a2-8928-ef97946880a0-catalog-content\") pod \"redhat-marketplace-248dw\" (UID: \"422dbc29-ef6c-40a2-8928-ef97946880a0\") " pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.383876 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/422dbc29-ef6c-40a2-8928-ef97946880a0-catalog-content\") pod \"redhat-marketplace-248dw\" (UID: \"422dbc29-ef6c-40a2-8928-ef97946880a0\") " pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.384099 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422dbc29-ef6c-40a2-8928-ef97946880a0-utilities\") pod \"redhat-marketplace-248dw\" (UID: \"422dbc29-ef6c-40a2-8928-ef97946880a0\") " pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.417376 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg5pf\" (UniqueName: \"kubernetes.io/projected/422dbc29-ef6c-40a2-8928-ef97946880a0-kube-api-access-fg5pf\") pod \"redhat-marketplace-248dw\" (UID: \"422dbc29-ef6c-40a2-8928-ef97946880a0\") " pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.438787 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.479817 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-s7jrc container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.479855 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-s7jrc" podUID="8a123de3-5556-4e34-8433-52805089c13c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.480088 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-s7jrc container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.480102 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-s7jrc" podUID="8a123de3-5556-4e34-8433-52805089c13c" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.904378 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5p4nt"] Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.905719 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.915912 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 07:56:04 crc kubenswrapper[4806]: I0126 07:56:04.932263 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5p4nt"] Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.024619 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-utilities\") pod \"redhat-operators-5p4nt\" (UID: \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\") " pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.024742 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qk7b\" (UniqueName: \"kubernetes.io/projected/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-kube-api-access-9qk7b\") pod \"redhat-operators-5p4nt\" (UID: \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\") " pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.024772 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-catalog-content\") pod \"redhat-operators-5p4nt\" (UID: \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\") " pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.031937 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts6cz"] Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.120061 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.126035 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:05 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:05 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:05 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.126099 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.126391 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qk7b\" (UniqueName: \"kubernetes.io/projected/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-kube-api-access-9qk7b\") pod \"redhat-operators-5p4nt\" (UID: \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\") " pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.126446 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-catalog-content\") pod \"redhat-operators-5p4nt\" (UID: 
\"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\") " pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.126539 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-utilities\") pod \"redhat-operators-5p4nt\" (UID: \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\") " pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.128081 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-utilities\") pod \"redhat-operators-5p4nt\" (UID: \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\") " pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.128293 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-catalog-content\") pod \"redhat-operators-5p4nt\" (UID: \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\") " pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.181842 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qk7b\" (UniqueName: \"kubernetes.io/projected/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-kube-api-access-9qk7b\") pod \"redhat-operators-5p4nt\" (UID: \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\") " pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.244322 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.288851 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cmd7h"] Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.289912 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.307341 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmd7h"] Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.358085 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts6cz" event={"ID":"a078c937-6bed-4604-a0a1-25c9c7d2503d","Type":"ContainerStarted","Data":"442cb782efd043b3c8a6e90a2886b0e89480b6555ca7004b410e5567c8c18e19"} Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.373458 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" event={"ID":"3bcae027-4e25-4c41-bbc9-639927f58691","Type":"ContainerStarted","Data":"29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75"} Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.373538 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" event={"ID":"3bcae027-4e25-4c41-bbc9-639927f58691","Type":"ContainerStarted","Data":"e462c7aa5efd405dbdbea2c0c4ed6ec5e86b59fde72ec70de998ba0788a45ab0"} Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.374337 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.395366 4806 generic.go:334] "Generic (PLEG): container finished" podID="4f544176-9dd8-4416-99f3-53299cd7ffb0" containerID="d86193acd0e7effc1f269af0665320de3945ab8568b4c6345b2b8ebbfb878c02" exitCode=0 Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.395451 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45bpz" event={"ID":"4f544176-9dd8-4416-99f3-53299cd7ffb0","Type":"ContainerDied","Data":"d86193acd0e7effc1f269af0665320de3945ab8568b4c6345b2b8ebbfb878c02"} Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.406160 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" podStartSLOduration=125.406142249 podStartE2EDuration="2m5.406142249s" podCreationTimestamp="2026-01-26 07:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:05.401196289 +0000 UTC m=+144.665604345" watchObservedRunningTime="2026-01-26 07:56:05.406142249 +0000 UTC m=+144.670550305" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.408472 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-248dw"] Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.425504 4806 generic.go:334] "Generic (PLEG): container finished" podID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" containerID="9f73378235c52bb9162bfaa56d05c51d249e027ec303ea8e8c5de45218d50f49" exitCode=0 Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.425625 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq7zd" event={"ID":"0531f954-d1d9-42f0-bd29-f8ff5b0871b4","Type":"ContainerDied","Data":"9f73378235c52bb9162bfaa56d05c51d249e027ec303ea8e8c5de45218d50f49"} Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.434227 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-utilities\") pod \"redhat-operators-cmd7h\" (UID: \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\") " pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.434954 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr85x\" (UniqueName: \"kubernetes.io/projected/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-kube-api-access-xr85x\") pod \"redhat-operators-cmd7h\" (UID: \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\") " pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.434991 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-catalog-content\") pod \"redhat-operators-cmd7h\" (UID: \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\") " pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.444611 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.445451 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.447774 4806 generic.go:334] "Generic (PLEG): container finished" podID="339dd820-f50a-4135-9da6-5768324b8d55" containerID="fbc84a449235680ffbcf290fcf3911cf270d2ce4e3f604e218451c30518dcbcf" exitCode=0 Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.448537 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqpjn" event={"ID":"339dd820-f50a-4135-9da6-5768324b8d55","Type":"ContainerDied","Data":"fbc84a449235680ffbcf290fcf3911cf270d2ce4e3f604e218451c30518dcbcf"} Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.458970 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.459285 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.474090 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 07:56:05 crc kubenswrapper[4806]: W0126 07:56:05.479121 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod422dbc29_ef6c_40a2_8928_ef97946880a0.slice/crio-fee2b079c0103ea3f5414c42770bd099723c4b95a30174354d8e98a1a8a26c25 WatchSource:0}: Error finding container fee2b079c0103ea3f5414c42770bd099723c4b95a30174354d8e98a1a8a26c25: Status 404 returned error can't find the container with id fee2b079c0103ea3f5414c42770bd099723c4b95a30174354d8e98a1a8a26c25 Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.537999 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-utilities\") pod \"redhat-operators-cmd7h\" (UID: \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\") " pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.538056 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr85x\" (UniqueName: \"kubernetes.io/projected/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-kube-api-access-xr85x\") pod \"redhat-operators-cmd7h\" (UID: \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\") " pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.538097 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-catalog-content\") pod \"redhat-operators-cmd7h\" (UID: \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\") " pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.541151 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-utilities\") pod \"redhat-operators-cmd7h\" (UID: \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\") " pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.541911 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-catalog-content\") pod \"redhat-operators-cmd7h\" (UID: \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\") " pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.564986 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr85x\" (UniqueName: \"kubernetes.io/projected/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-kube-api-access-xr85x\") pod \"redhat-operators-cmd7h\" (UID: \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\") " pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.639780 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f643dc7-1803-468f-9ec9-10f09b0a0e6b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f643dc7-1803-468f-9ec9-10f09b0a0e6b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.639889 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f643dc7-1803-468f-9ec9-10f09b0a0e6b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f643dc7-1803-468f-9ec9-10f09b0a0e6b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.662499 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.741716 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f643dc7-1803-468f-9ec9-10f09b0a0e6b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f643dc7-1803-468f-9ec9-10f09b0a0e6b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.742110 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f643dc7-1803-468f-9ec9-10f09b0a0e6b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f643dc7-1803-468f-9ec9-10f09b0a0e6b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.741809 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f643dc7-1803-468f-9ec9-10f09b0a0e6b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"7f643dc7-1803-468f-9ec9-10f09b0a0e6b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.767446 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f643dc7-1803-468f-9ec9-10f09b0a0e6b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"7f643dc7-1803-468f-9ec9-10f09b0a0e6b\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.821931 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 07:56:05 crc kubenswrapper[4806]: I0126 07:56:05.962051 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.041971 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5p4nt"] Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.046646 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-secret-volume\") pod \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\" (UID: \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\") " Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.046726 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h82dx\" (UniqueName: \"kubernetes.io/projected/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-kube-api-access-h82dx\") pod \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\" (UID: \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\") " Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.046746 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-config-volume\") pod \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\" (UID: \"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1\") " Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.048004 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-config-volume" (OuterVolumeSpecName: "config-volume") pod "d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1" (UID: "d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.051991 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-kube-api-access-h82dx" (OuterVolumeSpecName: "kube-api-access-h82dx") pod "d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1" (UID: "d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1"). InnerVolumeSpecName "kube-api-access-h82dx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.052422 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1" (UID: "d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.081732 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmd7h"] Jan 26 07:56:06 crc kubenswrapper[4806]: W0126 07:56:06.095877 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf65c110_74ee_4e0c_a7e8_bb27c891ff12.slice/crio-cb57cb3421d92cf7b4b438e5228673cae473582d2b36033754d5c568b867dc63 WatchSource:0}: Error finding container cb57cb3421d92cf7b4b438e5228673cae473582d2b36033754d5c568b867dc63: Status 404 returned error can't find the container with id cb57cb3421d92cf7b4b438e5228673cae473582d2b36033754d5c568b867dc63 Jan 26 07:56:06 crc kubenswrapper[4806]: W0126 07:56:06.116205 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc31e36f6_aabe_4f1e_8e7e_3bb086ec1cd3.slice/crio-b63d2e646b24fa7834cb91f6f08c3ab94f38b88f43ead7aee31e10e506c2b45f WatchSource:0}: Error finding container b63d2e646b24fa7834cb91f6f08c3ab94f38b88f43ead7aee31e10e506c2b45f: Status 404 returned error can't find the container with id b63d2e646b24fa7834cb91f6f08c3ab94f38b88f43ead7aee31e10e506c2b45f Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.121991 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:06 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:06 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:06 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.122040 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.148223 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.148250 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h82dx\" (UniqueName: \"kubernetes.io/projected/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-kube-api-access-h82dx\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.148260 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.195033 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 07:56:06 crc kubenswrapper[4806]: W0126 07:56:06.221068 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7f643dc7_1803_468f_9ec9_10f09b0a0e6b.slice/crio-3df751bfe12808668014df9bc874d08717fc8f7a7f13228e35aea4be97035a36 WatchSource:0}: Error finding container 3df751bfe12808668014df9bc874d08717fc8f7a7f13228e35aea4be97035a36: Status 404 returned error can't find the container 
with id 3df751bfe12808668014df9bc874d08717fc8f7a7f13228e35aea4be97035a36 Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.498138 4806 generic.go:334] "Generic (PLEG): container finished" podID="a078c937-6bed-4604-a0a1-25c9c7d2503d" containerID="c632ee6f5c612e8ebfbd2f6a9c7186f34b5dba409c11fd08809f428c0b20f8c3" exitCode=0 Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.498551 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts6cz" event={"ID":"a078c937-6bed-4604-a0a1-25c9c7d2503d","Type":"ContainerDied","Data":"c632ee6f5c612e8ebfbd2f6a9c7186f34b5dba409c11fd08809f428c0b20f8c3"} Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.504024 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p4nt" event={"ID":"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3","Type":"ContainerStarted","Data":"b63d2e646b24fa7834cb91f6f08c3ab94f38b88f43ead7aee31e10e506c2b45f"} Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.537561 4806 generic.go:334] "Generic (PLEG): container finished" podID="422dbc29-ef6c-40a2-8928-ef97946880a0" containerID="a5fe96f8df2064ccfa5a3c2c1aba93b78afc2716872ce4d0a6e98c71f7cdee2a" exitCode=0 Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.537626 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248dw" event={"ID":"422dbc29-ef6c-40a2-8928-ef97946880a0","Type":"ContainerDied","Data":"a5fe96f8df2064ccfa5a3c2c1aba93b78afc2716872ce4d0a6e98c71f7cdee2a"} Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.537697 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248dw" event={"ID":"422dbc29-ef6c-40a2-8928-ef97946880a0","Type":"ContainerStarted","Data":"fee2b079c0103ea3f5414c42770bd099723c4b95a30174354d8e98a1a8a26c25"} Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.544215 4806 generic.go:334] "Generic (PLEG): container finished" podID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" containerID="3063b61193856a80d2928479406834902b39f7cf88a4972ca8a1a1a34ceb6a1e" exitCode=0 Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.544323 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmd7h" event={"ID":"cf65c110-74ee-4e0c-a7e8-bb27c891ff12","Type":"ContainerDied","Data":"3063b61193856a80d2928479406834902b39f7cf88a4972ca8a1a1a34ceb6a1e"} Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.544370 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmd7h" event={"ID":"cf65c110-74ee-4e0c-a7e8-bb27c891ff12","Type":"ContainerStarted","Data":"cb57cb3421d92cf7b4b438e5228673cae473582d2b36033754d5c568b867dc63"} Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.557245 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7f643dc7-1803-468f-9ec9-10f09b0a0e6b","Type":"ContainerStarted","Data":"3df751bfe12808668014df9bc874d08717fc8f7a7f13228e35aea4be97035a36"} Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.561997 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" event={"ID":"d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1","Type":"ContainerDied","Data":"05f351bd628e4f1d0ed186e903a09110b6d0c74eb8f7ffa6bec2e64cb6f7da24"} Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.562054 4806 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="05f351bd628e4f1d0ed186e903a09110b6d0c74eb8f7ffa6bec2e64cb6f7da24" Jan 26 07:56:06 crc kubenswrapper[4806]: I0126 07:56:06.562028 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z" Jan 26 07:56:07 crc kubenswrapper[4806]: I0126 07:56:07.121277 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:07 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:07 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:07 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:07 crc kubenswrapper[4806]: I0126 07:56:07.121338 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:07 crc kubenswrapper[4806]: I0126 07:56:07.577597 4806 generic.go:334] "Generic (PLEG): container finished" podID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" containerID="4dd53dcd0162e198ce2edfac6a6d9cf0d545222b546903179b887fc7b0343059" exitCode=0 Jan 26 07:56:07 crc kubenswrapper[4806]: I0126 07:56:07.577924 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p4nt" event={"ID":"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3","Type":"ContainerDied","Data":"4dd53dcd0162e198ce2edfac6a6d9cf0d545222b546903179b887fc7b0343059"} Jan 26 07:56:07 crc kubenswrapper[4806]: I0126 07:56:07.581646 4806 generic.go:334] "Generic (PLEG): container finished" podID="7f643dc7-1803-468f-9ec9-10f09b0a0e6b" containerID="5638b0e2c7c812dea6a8c6c42e49fee2c9c875fcf5f46c4df8bd29f5b4160523" exitCode=0 Jan 26 07:56:07 crc kubenswrapper[4806]: I0126 07:56:07.582062 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7f643dc7-1803-468f-9ec9-10f09b0a0e6b","Type":"ContainerDied","Data":"5638b0e2c7c812dea6a8c6c42e49fee2c9c875fcf5f46c4df8bd29f5b4160523"} Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.094586 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.094668 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.094701 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: 
\"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.094751 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.095417 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.100399 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.110428 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.116905 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.121593 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:08 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:08 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:08 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.121645 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.201701 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.232919 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 07:56:08 crc kubenswrapper[4806]: I0126 07:56:08.237838 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 07:56:08 crc kubenswrapper[4806]: W0126 07:56:08.847798 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-1c4bcbbf157a22803cbf0cb2de2245ce4eecb50833e72bc1a8549317f965e582 WatchSource:0}: Error finding container 1c4bcbbf157a22803cbf0cb2de2245ce4eecb50833e72bc1a8549317f965e582: Status 404 returned error can't find the container with id 1c4bcbbf157a22803cbf0cb2de2245ce4eecb50833e72bc1a8549317f965e582 Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.147835 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:09 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:09 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:09 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.147880 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.273203 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.282299 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-4gdvx" Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.292116 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.435644 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f643dc7-1803-468f-9ec9-10f09b0a0e6b-kubelet-dir\") pod \"7f643dc7-1803-468f-9ec9-10f09b0a0e6b\" (UID: \"7f643dc7-1803-468f-9ec9-10f09b0a0e6b\") " Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.435821 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f643dc7-1803-468f-9ec9-10f09b0a0e6b-kube-api-access\") pod \"7f643dc7-1803-468f-9ec9-10f09b0a0e6b\" (UID: \"7f643dc7-1803-468f-9ec9-10f09b0a0e6b\") " Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.435968 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f643dc7-1803-468f-9ec9-10f09b0a0e6b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7f643dc7-1803-468f-9ec9-10f09b0a0e6b" (UID: "7f643dc7-1803-468f-9ec9-10f09b0a0e6b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.436583 4806 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f643dc7-1803-468f-9ec9-10f09b0a0e6b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.446711 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f643dc7-1803-468f-9ec9-10f09b0a0e6b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7f643dc7-1803-468f-9ec9-10f09b0a0e6b" (UID: "7f643dc7-1803-468f-9ec9-10f09b0a0e6b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.544012 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f643dc7-1803-468f-9ec9-10f09b0a0e6b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.634020 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1c4bcbbf157a22803cbf0cb2de2245ce4eecb50833e72bc1a8549317f965e582"} Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.635105 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"49e789989cb2dff4db5655a0deea9f5bb35cccbaaa89158dd1fd545a45206c4a"} Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.658778 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"71dfacd7600dc2a46a8a105b3dbf4cef7546a573ae0932dc292d0c1ed470e1fc"} Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.663654 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.665826 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"7f643dc7-1803-468f-9ec9-10f09b0a0e6b","Type":"ContainerDied","Data":"3df751bfe12808668014df9bc874d08717fc8f7a7f13228e35aea4be97035a36"} Jan 26 07:56:09 crc kubenswrapper[4806]: I0126 07:56:09.665877 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3df751bfe12808668014df9bc874d08717fc8f7a7f13228e35aea4be97035a36" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.125802 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:10 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:10 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:10 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.126176 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.260127 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-jrg5t" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.682221 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6fff08845c1ad8e17ef9940593de4e4892f28c9ebc5b1085585348d287cb7f6c"} Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.686754 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ee293579b3bd31008f61632c06379f35cc299a0a8e3f90ef6eca223d5c40ef08"} Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.689464 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"536b3e0f9915fae19cd7618f0317003c4475500fe062ff1c853a5cc65226425a"} Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.689868 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.794710 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 07:56:10 crc kubenswrapper[4806]: E0126 07:56:10.794974 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f643dc7-1803-468f-9ec9-10f09b0a0e6b" containerName="pruner" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.794987 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f643dc7-1803-468f-9ec9-10f09b0a0e6b" containerName="pruner" Jan 26 07:56:10 crc kubenswrapper[4806]: E0126 07:56:10.795002 4806 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1" containerName="collect-profiles" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.795007 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1" containerName="collect-profiles" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.795104 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1" containerName="collect-profiles" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.795120 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f643dc7-1803-468f-9ec9-10f09b0a0e6b" containerName="pruner" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.795553 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.798730 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.801945 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.805000 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.981929 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b38e5ab-4e02-48cc-9a16-e327e1aff30a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6b38e5ab-4e02-48cc-9a16-e327e1aff30a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 07:56:10 crc kubenswrapper[4806]: I0126 07:56:10.982040 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b38e5ab-4e02-48cc-9a16-e327e1aff30a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6b38e5ab-4e02-48cc-9a16-e327e1aff30a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 07:56:11 crc kubenswrapper[4806]: I0126 07:56:11.083338 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b38e5ab-4e02-48cc-9a16-e327e1aff30a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6b38e5ab-4e02-48cc-9a16-e327e1aff30a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 07:56:11 crc kubenswrapper[4806]: I0126 07:56:11.083408 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b38e5ab-4e02-48cc-9a16-e327e1aff30a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6b38e5ab-4e02-48cc-9a16-e327e1aff30a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 07:56:11 crc kubenswrapper[4806]: I0126 07:56:11.083546 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b38e5ab-4e02-48cc-9a16-e327e1aff30a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6b38e5ab-4e02-48cc-9a16-e327e1aff30a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 07:56:11 crc kubenswrapper[4806]: I0126 07:56:11.107452 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6b38e5ab-4e02-48cc-9a16-e327e1aff30a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6b38e5ab-4e02-48cc-9a16-e327e1aff30a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 07:56:11 crc kubenswrapper[4806]: I0126 07:56:11.127783 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:11 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:11 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:11 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:11 crc kubenswrapper[4806]: I0126 07:56:11.127837 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:11 crc kubenswrapper[4806]: I0126 07:56:11.144550 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 07:56:11 crc kubenswrapper[4806]: I0126 07:56:11.520429 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 07:56:11 crc kubenswrapper[4806]: I0126 07:56:11.818812 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6b38e5ab-4e02-48cc-9a16-e327e1aff30a","Type":"ContainerStarted","Data":"a5be70ab7658dfaaa46050a045529fe6d10d98d2275a276fee2e78323c773e8f"} Jan 26 07:56:12 crc kubenswrapper[4806]: I0126 07:56:12.121980 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:12 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:12 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:12 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:12 crc kubenswrapper[4806]: I0126 07:56:12.122032 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:12 crc kubenswrapper[4806]: I0126 07:56:12.851027 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6b38e5ab-4e02-48cc-9a16-e327e1aff30a","Type":"ContainerStarted","Data":"dfa73a273ec3d32bd473dd3a6bb0947ea608be81e62e336e99aadaed1d8b726e"} Jan 26 07:56:13 crc kubenswrapper[4806]: I0126 07:56:13.121800 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:13 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:13 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:13 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:13 crc kubenswrapper[4806]: I0126 07:56:13.121852 4806 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:13 crc kubenswrapper[4806]: I0126 07:56:13.745609 4806 patch_prober.go:28] interesting pod/console-f9d7485db-qd6mh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 26 07:56:13 crc kubenswrapper[4806]: I0126 07:56:13.745671 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-qd6mh" podUID="ee89739e-edc1-41b5-bf4a-da80ba0a59aa" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 26 07:56:13 crc kubenswrapper[4806]: I0126 07:56:13.879278 4806 generic.go:334] "Generic (PLEG): container finished" podID="6b38e5ab-4e02-48cc-9a16-e327e1aff30a" containerID="dfa73a273ec3d32bd473dd3a6bb0947ea608be81e62e336e99aadaed1d8b726e" exitCode=0 Jan 26 07:56:13 crc kubenswrapper[4806]: I0126 07:56:13.879325 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6b38e5ab-4e02-48cc-9a16-e327e1aff30a","Type":"ContainerDied","Data":"dfa73a273ec3d32bd473dd3a6bb0947ea608be81e62e336e99aadaed1d8b726e"} Jan 26 07:56:14 crc kubenswrapper[4806]: I0126 07:56:14.120368 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:14 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:14 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:14 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:14 crc kubenswrapper[4806]: I0126 07:56:14.120437 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:14 crc kubenswrapper[4806]: I0126 07:56:14.485129 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-s7jrc" Jan 26 07:56:15 crc kubenswrapper[4806]: I0126 07:56:15.122984 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:15 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:15 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:15 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:15 crc kubenswrapper[4806]: I0126 07:56:15.123047 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:15 crc kubenswrapper[4806]: I0126 07:56:15.806669 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 07:56:15 crc kubenswrapper[4806]: I0126 07:56:15.806753 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 07:56:16 crc kubenswrapper[4806]: I0126 07:56:16.128604 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:16 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:16 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:16 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:16 crc kubenswrapper[4806]: I0126 07:56:16.132344 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:17 crc kubenswrapper[4806]: I0126 07:56:17.121115 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:17 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:17 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:17 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:17 crc kubenswrapper[4806]: I0126 07:56:17.121564 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:18 crc kubenswrapper[4806]: I0126 07:56:18.119622 4806 patch_prober.go:28] interesting pod/router-default-5444994796-l5mfg container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 07:56:18 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Jan 26 07:56:18 crc kubenswrapper[4806]: [+]process-running ok Jan 26 07:56:18 crc kubenswrapper[4806]: healthz check failed Jan 26 07:56:18 crc kubenswrapper[4806]: I0126 07:56:18.119707 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-l5mfg" podUID="2ce69172-bf74-4c4a-8aeb-9b1d86f50254" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 07:56:18 crc kubenswrapper[4806]: I0126 07:56:18.247577 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7pdxm"] Jan 26 07:56:18 crc kubenswrapper[4806]: I0126 07:56:18.247792 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" podUID="e8f91104-da2f-4f2a-90b2-619d9035f8ca" containerName="controller-manager" 
containerID="cri-o://4052e8d2c06743e3a043c779cd56f2fd0434b7271ab2ab71f9b50227bd05e3ba" gracePeriod=30 Jan 26 07:56:18 crc kubenswrapper[4806]: I0126 07:56:18.293097 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f"] Jan 26 07:56:18 crc kubenswrapper[4806]: I0126 07:56:18.293299 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" podUID="14769a57-f19b-4d49-868f-d1754827714b" containerName="route-controller-manager" containerID="cri-o://cf76e26e9d50384799924cebb3e5a1176ca291a809409c0976bf8e806d3df9e4" gracePeriod=30 Jan 26 07:56:18 crc kubenswrapper[4806]: I0126 07:56:18.914029 4806 generic.go:334] "Generic (PLEG): container finished" podID="14769a57-f19b-4d49-868f-d1754827714b" containerID="cf76e26e9d50384799924cebb3e5a1176ca291a809409c0976bf8e806d3df9e4" exitCode=0 Jan 26 07:56:18 crc kubenswrapper[4806]: I0126 07:56:18.914172 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" event={"ID":"14769a57-f19b-4d49-868f-d1754827714b","Type":"ContainerDied","Data":"cf76e26e9d50384799924cebb3e5a1176ca291a809409c0976bf8e806d3df9e4"} Jan 26 07:56:18 crc kubenswrapper[4806]: I0126 07:56:18.916754 4806 generic.go:334] "Generic (PLEG): container finished" podID="e8f91104-da2f-4f2a-90b2-619d9035f8ca" containerID="4052e8d2c06743e3a043c779cd56f2fd0434b7271ab2ab71f9b50227bd05e3ba" exitCode=0 Jan 26 07:56:18 crc kubenswrapper[4806]: I0126 07:56:18.916795 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" event={"ID":"e8f91104-da2f-4f2a-90b2-619d9035f8ca","Type":"ContainerDied","Data":"4052e8d2c06743e3a043c779cd56f2fd0434b7271ab2ab71f9b50227bd05e3ba"} Jan 26 07:56:19 crc kubenswrapper[4806]: I0126 07:56:19.121813 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:56:19 crc kubenswrapper[4806]: I0126 07:56:19.124236 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-l5mfg" Jan 26 07:56:22 crc kubenswrapper[4806]: I0126 07:56:22.797117 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:56:22 crc kubenswrapper[4806]: I0126 07:56:22.819071 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/137029f0-49ad-4400-b117-2eff9271bce3-metrics-certs\") pod \"network-metrics-daemon-rqmvf\" (UID: \"137029f0-49ad-4400-b117-2eff9271bce3\") " pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:56:22 crc kubenswrapper[4806]: I0126 07:56:22.922498 4806 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7pdxm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 26 07:56:22 crc kubenswrapper[4806]: I0126 07:56:22.922584 4806 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" podUID="e8f91104-da2f-4f2a-90b2-619d9035f8ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 26 07:56:22 crc kubenswrapper[4806]: I0126 07:56:22.923867 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqmvf" Jan 26 07:56:23 crc kubenswrapper[4806]: I0126 07:56:23.509381 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 07:56:23 crc kubenswrapper[4806]: I0126 07:56:23.725026 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:56:23 crc kubenswrapper[4806]: I0126 07:56:23.730264 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 07:56:24 crc kubenswrapper[4806]: I0126 07:56:24.212209 4806 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-q5z6f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 26 07:56:24 crc kubenswrapper[4806]: I0126 07:56:24.212287 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" podUID="14769a57-f19b-4d49-868f-d1754827714b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 26 07:56:31 crc kubenswrapper[4806]: I0126 07:56:31.141474 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 07:56:31 crc kubenswrapper[4806]: I0126 07:56:31.319036 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b38e5ab-4e02-48cc-9a16-e327e1aff30a-kubelet-dir\") pod \"6b38e5ab-4e02-48cc-9a16-e327e1aff30a\" (UID: \"6b38e5ab-4e02-48cc-9a16-e327e1aff30a\") " Jan 26 07:56:31 crc kubenswrapper[4806]: I0126 07:56:31.319336 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b38e5ab-4e02-48cc-9a16-e327e1aff30a-kube-api-access\") pod \"6b38e5ab-4e02-48cc-9a16-e327e1aff30a\" (UID: \"6b38e5ab-4e02-48cc-9a16-e327e1aff30a\") " Jan 26 07:56:31 crc kubenswrapper[4806]: I0126 07:56:31.319656 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b38e5ab-4e02-48cc-9a16-e327e1aff30a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6b38e5ab-4e02-48cc-9a16-e327e1aff30a" (UID: "6b38e5ab-4e02-48cc-9a16-e327e1aff30a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:56:31 crc kubenswrapper[4806]: I0126 07:56:31.331717 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b38e5ab-4e02-48cc-9a16-e327e1aff30a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6b38e5ab-4e02-48cc-9a16-e327e1aff30a" (UID: "6b38e5ab-4e02-48cc-9a16-e327e1aff30a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:56:31 crc kubenswrapper[4806]: I0126 07:56:31.421027 4806 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b38e5ab-4e02-48cc-9a16-e327e1aff30a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:31 crc kubenswrapper[4806]: I0126 07:56:31.421084 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b38e5ab-4e02-48cc-9a16-e327e1aff30a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:31 crc kubenswrapper[4806]: I0126 07:56:31.988559 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6b38e5ab-4e02-48cc-9a16-e327e1aff30a","Type":"ContainerDied","Data":"a5be70ab7658dfaaa46050a045529fe6d10d98d2275a276fee2e78323c773e8f"} Jan 26 07:56:31 crc kubenswrapper[4806]: I0126 07:56:31.988797 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5be70ab7658dfaaa46050a045529fe6d10d98d2275a276fee2e78323c773e8f" Jan 26 07:56:31 crc kubenswrapper[4806]: I0126 07:56:31.988641 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.104066 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.245113 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-proxy-ca-bundles\") pod \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.245185 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2n57\" (UniqueName: \"kubernetes.io/projected/e8f91104-da2f-4f2a-90b2-619d9035f8ca-kube-api-access-f2n57\") pod \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.245230 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-config\") pod \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.245261 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8f91104-da2f-4f2a-90b2-619d9035f8ca-serving-cert\") pod \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.245279 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-client-ca\") pod \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\" (UID: \"e8f91104-da2f-4f2a-90b2-619d9035f8ca\") " Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.245839 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"e8f91104-da2f-4f2a-90b2-619d9035f8ca" (UID: "e8f91104-da2f-4f2a-90b2-619d9035f8ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.245850 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e8f91104-da2f-4f2a-90b2-619d9035f8ca" (UID: "e8f91104-da2f-4f2a-90b2-619d9035f8ca"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.245914 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-config" (OuterVolumeSpecName: "config") pod "e8f91104-da2f-4f2a-90b2-619d9035f8ca" (UID: "e8f91104-da2f-4f2a-90b2-619d9035f8ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.251200 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f91104-da2f-4f2a-90b2-619d9035f8ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e8f91104-da2f-4f2a-90b2-619d9035f8ca" (UID: "e8f91104-da2f-4f2a-90b2-619d9035f8ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.257586 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f91104-da2f-4f2a-90b2-619d9035f8ca-kube-api-access-f2n57" (OuterVolumeSpecName: "kube-api-access-f2n57") pod "e8f91104-da2f-4f2a-90b2-619d9035f8ca" (UID: "e8f91104-da2f-4f2a-90b2-619d9035f8ca"). InnerVolumeSpecName "kube-api-access-f2n57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.346285 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.346332 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2n57\" (UniqueName: \"kubernetes.io/projected/e8f91104-da2f-4f2a-90b2-619d9035f8ca-kube-api-access-f2n57\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.346346 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.346357 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e8f91104-da2f-4f2a-90b2-619d9035f8ca-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.346365 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e8f91104-da2f-4f2a-90b2-619d9035f8ca-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.437546 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cc5cc848b-r228k"] Jan 26 07:56:33 crc kubenswrapper[4806]: E0126 07:56:33.437812 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b38e5ab-4e02-48cc-9a16-e327e1aff30a" containerName="pruner" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.437828 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b38e5ab-4e02-48cc-9a16-e327e1aff30a" containerName="pruner" Jan 26 07:56:33 crc kubenswrapper[4806]: E0126 07:56:33.437839 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f91104-da2f-4f2a-90b2-619d9035f8ca" containerName="controller-manager" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.437846 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f91104-da2f-4f2a-90b2-619d9035f8ca" containerName="controller-manager" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.437970 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b38e5ab-4e02-48cc-9a16-e327e1aff30a" containerName="pruner" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.437988 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f91104-da2f-4f2a-90b2-619d9035f8ca" containerName="controller-manager" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.438493 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.449622 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cc5cc848b-r228k"] Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.550695 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-proxy-ca-bundles\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.551085 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8643bd03-327f-4479-8cc2-0be4bb810ea3-serving-cert\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.551173 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-config\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.551207 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9qgk\" (UniqueName: \"kubernetes.io/projected/8643bd03-327f-4479-8cc2-0be4bb810ea3-kube-api-access-q9qgk\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.551234 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-client-ca\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.652824 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-proxy-ca-bundles\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.652883 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8643bd03-327f-4479-8cc2-0be4bb810ea3-serving-cert\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.652946 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-config\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.652970 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9qgk\" (UniqueName: \"kubernetes.io/projected/8643bd03-327f-4479-8cc2-0be4bb810ea3-kube-api-access-q9qgk\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.653009 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-client-ca\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.654225 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-proxy-ca-bundles\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.654416 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-config\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.655107 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-client-ca\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.657293 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8643bd03-327f-4479-8cc2-0be4bb810ea3-serving-cert\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.671168 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9qgk\" (UniqueName: \"kubernetes.io/projected/8643bd03-327f-4479-8cc2-0be4bb810ea3-kube-api-access-q9qgk\") pod \"controller-manager-7cc5cc848b-r228k\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.761474 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.923311 4806 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-7pdxm container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 07:56:33 crc kubenswrapper[4806]: I0126 07:56:33.923427 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" podUID="e8f91104-da2f-4f2a-90b2-619d9035f8ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 07:56:34 crc kubenswrapper[4806]: I0126 07:56:34.000534 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" event={"ID":"e8f91104-da2f-4f2a-90b2-619d9035f8ca","Type":"ContainerDied","Data":"688372280e8ba1fba515cf38eb3112e991aa22340a2b6f65e4591b184508c17f"} Jan 26 07:56:34 crc kubenswrapper[4806]: I0126 07:56:34.000579 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-7pdxm" Jan 26 07:56:34 crc kubenswrapper[4806]: I0126 07:56:34.000587 4806 scope.go:117] "RemoveContainer" containerID="4052e8d2c06743e3a043c779cd56f2fd0434b7271ab2ab71f9b50227bd05e3ba" Jan 26 07:56:34 crc kubenswrapper[4806]: I0126 07:56:34.030081 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7pdxm"] Jan 26 07:56:34 crc kubenswrapper[4806]: I0126 07:56:34.032496 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-7pdxm"] Jan 26 07:56:34 crc kubenswrapper[4806]: E0126 07:56:34.737675 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 07:56:34 crc kubenswrapper[4806]: E0126 07:56:34.737923 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpx27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bq7zd_openshift-marketplace(0531f954-d1d9-42f0-bd29-f8ff5b0871b4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 07:56:34 crc kubenswrapper[4806]: E0126 07:56:34.739134 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bq7zd" podUID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" Jan 26 07:56:35 crc kubenswrapper[4806]: I0126 07:56:35.049808 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f91104-da2f-4f2a-90b2-619d9035f8ca" path="/var/lib/kubelet/pods/e8f91104-da2f-4f2a-90b2-619d9035f8ca/volumes" Jan 26 07:56:35 crc kubenswrapper[4806]: I0126 07:56:35.206182 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-ncld5" Jan 26 07:56:35 crc kubenswrapper[4806]: I0126 07:56:35.211862 4806 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-q5z6f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 07:56:35 crc kubenswrapper[4806]: I0126 07:56:35.211939 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" podUID="14769a57-f19b-4d49-868f-d1754827714b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 07:56:38 crc kubenswrapper[4806]: I0126 07:56:38.118424 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cc5cc848b-r228k"] Jan 26 07:56:40 crc 
kubenswrapper[4806]: E0126 07:56:40.874352 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bq7zd" podUID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" Jan 26 07:56:40 crc kubenswrapper[4806]: E0126 07:56:40.897182 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 07:56:40 crc kubenswrapper[4806]: E0126 07:56:40.897738 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qk7b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5p4nt_openshift-marketplace(c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 07:56:40 crc kubenswrapper[4806]: E0126 07:56:40.899582 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5p4nt" podUID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" Jan 26 07:56:40 crc kubenswrapper[4806]: I0126 07:56:40.912121 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:56:40 crc kubenswrapper[4806]: I0126 07:56:40.984451 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14769a57-f19b-4d49-868f-d1754827714b-serving-cert\") pod \"14769a57-f19b-4d49-868f-d1754827714b\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " Jan 26 07:56:40 crc kubenswrapper[4806]: I0126 07:56:40.984631 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14769a57-f19b-4d49-868f-d1754827714b-client-ca\") pod \"14769a57-f19b-4d49-868f-d1754827714b\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " Jan 26 07:56:40 crc kubenswrapper[4806]: I0126 07:56:40.984666 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfnpk\" (UniqueName: \"kubernetes.io/projected/14769a57-f19b-4d49-868f-d1754827714b-kube-api-access-bfnpk\") pod \"14769a57-f19b-4d49-868f-d1754827714b\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " Jan 26 07:56:40 crc kubenswrapper[4806]: I0126 07:56:40.984697 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14769a57-f19b-4d49-868f-d1754827714b-config\") pod \"14769a57-f19b-4d49-868f-d1754827714b\" (UID: \"14769a57-f19b-4d49-868f-d1754827714b\") " Jan 26 07:56:40 crc kubenswrapper[4806]: I0126 07:56:40.985539 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14769a57-f19b-4d49-868f-d1754827714b-config" (OuterVolumeSpecName: "config") pod "14769a57-f19b-4d49-868f-d1754827714b" (UID: "14769a57-f19b-4d49-868f-d1754827714b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:40 crc kubenswrapper[4806]: I0126 07:56:40.986350 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14769a57-f19b-4d49-868f-d1754827714b-client-ca" (OuterVolumeSpecName: "client-ca") pod "14769a57-f19b-4d49-868f-d1754827714b" (UID: "14769a57-f19b-4d49-868f-d1754827714b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:40 crc kubenswrapper[4806]: I0126 07:56:40.988327 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs"] Jan 26 07:56:40 crc kubenswrapper[4806]: E0126 07:56:40.988593 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14769a57-f19b-4d49-868f-d1754827714b" containerName="route-controller-manager" Jan 26 07:56:40 crc kubenswrapper[4806]: I0126 07:56:40.988612 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="14769a57-f19b-4d49-868f-d1754827714b" containerName="route-controller-manager" Jan 26 07:56:40 crc kubenswrapper[4806]: I0126 07:56:40.988716 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="14769a57-f19b-4d49-868f-d1754827714b" containerName="route-controller-manager" Jan 26 07:56:40 crc kubenswrapper[4806]: I0126 07:56:40.997572 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs"] Jan 26 07:56:40 crc kubenswrapper[4806]: I0126 07:56:40.997686 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.001377 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14769a57-f19b-4d49-868f-d1754827714b-kube-api-access-bfnpk" (OuterVolumeSpecName: "kube-api-access-bfnpk") pod "14769a57-f19b-4d49-868f-d1754827714b" (UID: "14769a57-f19b-4d49-868f-d1754827714b"). InnerVolumeSpecName "kube-api-access-bfnpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:56:41 crc kubenswrapper[4806]: E0126 07:56:41.002173 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 07:56:41 crc kubenswrapper[4806]: E0126 07:56:41.002299 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr85x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cmd7h_openshift-marketplace(cf65c110-74ee-4e0c-a7e8-bb27c891ff12): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.003386 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14769a57-f19b-4d49-868f-d1754827714b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "14769a57-f19b-4d49-868f-d1754827714b" (UID: "14769a57-f19b-4d49-868f-d1754827714b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:56:41 crc kubenswrapper[4806]: E0126 07:56:41.003578 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-cmd7h" podUID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.086136 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-client-ca\") pod \"route-controller-manager-7cccfc774d-2hfxs\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.087353 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-config\") pod \"route-controller-manager-7cccfc774d-2hfxs\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.087388 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-serving-cert\") pod \"route-controller-manager-7cccfc774d-2hfxs\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.087416 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gsx9\" (UniqueName: \"kubernetes.io/projected/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-kube-api-access-8gsx9\") pod \"route-controller-manager-7cccfc774d-2hfxs\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.087482 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/14769a57-f19b-4d49-868f-d1754827714b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.087493 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfnpk\" (UniqueName: \"kubernetes.io/projected/14769a57-f19b-4d49-868f-d1754827714b-kube-api-access-bfnpk\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.087501 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14769a57-f19b-4d49-868f-d1754827714b-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.087509 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14769a57-f19b-4d49-868f-d1754827714b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.088764 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" 
event={"ID":"14769a57-f19b-4d49-868f-d1754827714b","Type":"ContainerDied","Data":"9f3f7f56073ccf3343e9a450f5da5b876cfd6487b9c7fb2a44c4b61e3d7fd147"} Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.088832 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f" Jan 26 07:56:41 crc kubenswrapper[4806]: E0126 07:56:41.203422 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 07:56:41 crc kubenswrapper[4806]: E0126 07:56:41.203560 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m6kg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rqpjn_openshift-marketplace(339dd820-f50a-4135-9da6-5768324b8d55): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 07:56:41 crc kubenswrapper[4806]: E0126 07:56:41.204831 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rqpjn" podUID="339dd820-f50a-4135-9da6-5768324b8d55" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.205941 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-client-ca\") pod \"route-controller-manager-7cccfc774d-2hfxs\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.206853 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-config\") pod \"route-controller-manager-7cccfc774d-2hfxs\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.207173 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-serving-cert\") pod \"route-controller-manager-7cccfc774d-2hfxs\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.207229 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gsx9\" (UniqueName: \"kubernetes.io/projected/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-kube-api-access-8gsx9\") pod \"route-controller-manager-7cccfc774d-2hfxs\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.207463 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-client-ca\") pod \"route-controller-manager-7cccfc774d-2hfxs\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.208103 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-config\") pod \"route-controller-manager-7cccfc774d-2hfxs\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.216976 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-serving-cert\") pod \"route-controller-manager-7cccfc774d-2hfxs\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.221983 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gsx9\" (UniqueName: \"kubernetes.io/projected/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-kube-api-access-8gsx9\") pod \"route-controller-manager-7cccfc774d-2hfxs\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.255044 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f"] Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.266439 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q5z6f"] Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.307601 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rqmvf"] Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 
07:56:41.444314 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:41 crc kubenswrapper[4806]: I0126 07:56:41.884095 4806 scope.go:117] "RemoveContainer" containerID="cf76e26e9d50384799924cebb3e5a1176ca291a809409c0976bf8e806d3df9e4" Jan 26 07:56:41 crc kubenswrapper[4806]: E0126 07:56:41.890820 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5p4nt" podUID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" Jan 26 07:56:41 crc kubenswrapper[4806]: E0126 07:56:41.890941 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cmd7h" podUID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" Jan 26 07:56:42 crc kubenswrapper[4806]: I0126 07:56:42.098129 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" event={"ID":"137029f0-49ad-4400-b117-2eff9271bce3","Type":"ContainerStarted","Data":"b11076060f27d1591934226166962bb23efae2dcc7b11139c2740824c1ec5c86"} Jan 26 07:56:42 crc kubenswrapper[4806]: E0126 07:56:42.104296 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rqpjn" podUID="339dd820-f50a-4135-9da6-5768324b8d55" Jan 26 07:56:42 crc kubenswrapper[4806]: I0126 07:56:42.339493 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cc5cc848b-r228k"] Jan 26 07:56:42 crc kubenswrapper[4806]: I0126 07:56:42.440197 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs"] Jan 26 07:56:42 crc kubenswrapper[4806]: W0126 07:56:42.444811 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4a4dbaf_e0fb_4f3b_8b03_709e232a265e.slice/crio-c705e8d342445044ff6ee69aa553c98405965943ebea9a16ee63f074c5d675d5 WatchSource:0}: Error finding container c705e8d342445044ff6ee69aa553c98405965943ebea9a16ee63f074c5d675d5: Status 404 returned error can't find the container with id c705e8d342445044ff6ee69aa553c98405965943ebea9a16ee63f074c5d675d5 Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.047003 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14769a57-f19b-4d49-868f-d1754827714b" path="/var/lib/kubelet/pods/14769a57-f19b-4d49-868f-d1754827714b/volumes" Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.115092 4806 generic.go:334] "Generic (PLEG): container finished" podID="df97f49a-b950-45f2-8c66-52f2c6c33163" containerID="4d03042de38e2da530de8bbfab73a394136fb2c89fdb30c69b2eafeed14a82b2" exitCode=0 Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.116009 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79ddh" 
event={"ID":"df97f49a-b950-45f2-8c66-52f2c6c33163","Type":"ContainerDied","Data":"4d03042de38e2da530de8bbfab73a394136fb2c89fdb30c69b2eafeed14a82b2"} Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.119965 4806 generic.go:334] "Generic (PLEG): container finished" podID="4f544176-9dd8-4416-99f3-53299cd7ffb0" containerID="ea666b208fdf78835c540e568f345bd0896ef9b61579eabb512293a19144806f" exitCode=0 Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.120038 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45bpz" event={"ID":"4f544176-9dd8-4416-99f3-53299cd7ffb0","Type":"ContainerDied","Data":"ea666b208fdf78835c540e568f345bd0896ef9b61579eabb512293a19144806f"} Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.122480 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" event={"ID":"137029f0-49ad-4400-b117-2eff9271bce3","Type":"ContainerStarted","Data":"a5cb545b68dc9d0ffecc2e68914e02b97e8364310052c6ce90ac354c9a455329"} Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.122540 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rqmvf" event={"ID":"137029f0-49ad-4400-b117-2eff9271bce3","Type":"ContainerStarted","Data":"97576a3cac53f45861f327f28b869ad098a3856213b281379c3ae3f3f5464819"} Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.135894 4806 generic.go:334] "Generic (PLEG): container finished" podID="a078c937-6bed-4604-a0a1-25c9c7d2503d" containerID="d633fc37d630dcb69de0324a4873c8763cb10c676151f0af5806d6f3c27daa82" exitCode=0 Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.136970 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts6cz" event={"ID":"a078c937-6bed-4604-a0a1-25c9c7d2503d","Type":"ContainerDied","Data":"d633fc37d630dcb69de0324a4873c8763cb10c676151f0af5806d6f3c27daa82"} Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.148845 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" event={"ID":"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e","Type":"ContainerStarted","Data":"4fbaed5b3a6f82a5584220127613531194bb11d92b23d5b980160665725c56e2"} Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.148887 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" event={"ID":"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e","Type":"ContainerStarted","Data":"c705e8d342445044ff6ee69aa553c98405965943ebea9a16ee63f074c5d675d5"} Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.149481 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.154502 4806 generic.go:334] "Generic (PLEG): container finished" podID="422dbc29-ef6c-40a2-8928-ef97946880a0" containerID="d5c2815870c53f05ed7684ee87098fe78f03dbeea9ed293f80ff9cae1caa396e" exitCode=0 Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.154797 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248dw" event={"ID":"422dbc29-ef6c-40a2-8928-ef97946880a0","Type":"ContainerDied","Data":"d5c2815870c53f05ed7684ee87098fe78f03dbeea9ed293f80ff9cae1caa396e"} Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.160909 4806 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" event={"ID":"8643bd03-327f-4479-8cc2-0be4bb810ea3","Type":"ContainerStarted","Data":"4fa1af2b19232f4759e35136e032f9b1a94dbcea18763ed2605d39a8d9ed207d"} Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.160957 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" event={"ID":"8643bd03-327f-4479-8cc2-0be4bb810ea3","Type":"ContainerStarted","Data":"597a8867a7b5f0123f2dddf6bb6a7754cda88032c78ff897c243765746ee8356"} Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.161073 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" podUID="8643bd03-327f-4479-8cc2-0be4bb810ea3" containerName="controller-manager" containerID="cri-o://4fa1af2b19232f4759e35136e032f9b1a94dbcea18763ed2605d39a8d9ed207d" gracePeriod=30 Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.161437 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.171846 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.210016 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hdxh9"] Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.224602 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.356811 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" podStartSLOduration=5.356793028 podStartE2EDuration="5.356793028s" podCreationTimestamp="2026-01-26 07:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:43.35367268 +0000 UTC m=+182.618080736" watchObservedRunningTime="2026-01-26 07:56:43.356793028 +0000 UTC m=+182.621201084" Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.399413 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" podStartSLOduration=25.39939813 podStartE2EDuration="25.39939813s" podCreationTimestamp="2026-01-26 07:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:43.397891848 +0000 UTC m=+182.662299914" watchObservedRunningTime="2026-01-26 07:56:43.39939813 +0000 UTC m=+182.663806186" Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.762475 4806 patch_prober.go:28] interesting pod/controller-manager-7cc5cc848b-r228k container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body= Jan 26 07:56:43 crc kubenswrapper[4806]: I0126 07:56:43.762559 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" 
podUID="8643bd03-327f-4479-8cc2-0be4bb810ea3" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" Jan 26 07:56:44 crc kubenswrapper[4806]: I0126 07:56:44.176567 4806 generic.go:334] "Generic (PLEG): container finished" podID="8643bd03-327f-4479-8cc2-0be4bb810ea3" containerID="4fa1af2b19232f4759e35136e032f9b1a94dbcea18763ed2605d39a8d9ed207d" exitCode=0 Jan 26 07:56:44 crc kubenswrapper[4806]: I0126 07:56:44.176674 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" event={"ID":"8643bd03-327f-4479-8cc2-0be4bb810ea3","Type":"ContainerDied","Data":"4fa1af2b19232f4759e35136e032f9b1a94dbcea18763ed2605d39a8d9ed207d"} Jan 26 07:56:44 crc kubenswrapper[4806]: I0126 07:56:44.195181 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rqmvf" podStartSLOduration=165.19516623 podStartE2EDuration="2m45.19516623s" podCreationTimestamp="2026-01-26 07:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:44.192289719 +0000 UTC m=+183.456697775" watchObservedRunningTime="2026-01-26 07:56:44.19516623 +0000 UTC m=+183.459574286" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.330201 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.362409 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cd6d8f48-z2srv"] Jan 26 07:56:45 crc kubenswrapper[4806]: E0126 07:56:45.362683 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8643bd03-327f-4479-8cc2-0be4bb810ea3" containerName="controller-manager" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.362701 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8643bd03-327f-4479-8cc2-0be4bb810ea3" containerName="controller-manager" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.362807 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8643bd03-327f-4479-8cc2-0be4bb810ea3" containerName="controller-manager" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.363186 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.385354 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cd6d8f48-z2srv"] Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.465755 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-client-ca\") pod \"8643bd03-327f-4479-8cc2-0be4bb810ea3\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.465811 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-proxy-ca-bundles\") pod \"8643bd03-327f-4479-8cc2-0be4bb810ea3\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.465900 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9qgk\" (UniqueName: \"kubernetes.io/projected/8643bd03-327f-4479-8cc2-0be4bb810ea3-kube-api-access-q9qgk\") pod \"8643bd03-327f-4479-8cc2-0be4bb810ea3\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.465921 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8643bd03-327f-4479-8cc2-0be4bb810ea3-serving-cert\") pod \"8643bd03-327f-4479-8cc2-0be4bb810ea3\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.465953 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-config\") pod \"8643bd03-327f-4479-8cc2-0be4bb810ea3\" (UID: \"8643bd03-327f-4479-8cc2-0be4bb810ea3\") " Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.466142 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78lvf\" (UniqueName: \"kubernetes.io/projected/e433a9e2-2b5f-4baa-8fa5-683037aa9783-kube-api-access-78lvf\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.466172 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-client-ca\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.466186 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e433a9e2-2b5f-4baa-8fa5-683037aa9783-serving-cert\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.466219 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-proxy-ca-bundles\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.466237 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-config\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.467003 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-client-ca" (OuterVolumeSpecName: "client-ca") pod "8643bd03-327f-4479-8cc2-0be4bb810ea3" (UID: "8643bd03-327f-4479-8cc2-0be4bb810ea3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.467322 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8643bd03-327f-4479-8cc2-0be4bb810ea3" (UID: "8643bd03-327f-4479-8cc2-0be4bb810ea3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.468133 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-config" (OuterVolumeSpecName: "config") pod "8643bd03-327f-4479-8cc2-0be4bb810ea3" (UID: "8643bd03-327f-4479-8cc2-0be4bb810ea3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.567094 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-proxy-ca-bundles\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.567145 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-config\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.567203 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78lvf\" (UniqueName: \"kubernetes.io/projected/e433a9e2-2b5f-4baa-8fa5-683037aa9783-kube-api-access-78lvf\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.567260 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-client-ca\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.567288 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e433a9e2-2b5f-4baa-8fa5-683037aa9783-serving-cert\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.567328 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.567340 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.567349 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8643bd03-327f-4479-8cc2-0be4bb810ea3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.569024 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-client-ca\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.572002 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-config\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.582918 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-proxy-ca-bundles\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.590865 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.591432 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e433a9e2-2b5f-4baa-8fa5-683037aa9783-serving-cert\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.591619 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.594230 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.596813 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.601103 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.634498 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78lvf\" (UniqueName: \"kubernetes.io/projected/e433a9e2-2b5f-4baa-8fa5-683037aa9783-kube-api-access-78lvf\") pod \"controller-manager-6cd6d8f48-z2srv\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.643564 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8643bd03-327f-4479-8cc2-0be4bb810ea3-kube-api-access-q9qgk" (OuterVolumeSpecName: "kube-api-access-q9qgk") pod "8643bd03-327f-4479-8cc2-0be4bb810ea3" (UID: "8643bd03-327f-4479-8cc2-0be4bb810ea3"). InnerVolumeSpecName "kube-api-access-q9qgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.653196 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8643bd03-327f-4479-8cc2-0be4bb810ea3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8643bd03-327f-4479-8cc2-0be4bb810ea3" (UID: "8643bd03-327f-4479-8cc2-0be4bb810ea3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.668493 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9qgk\" (UniqueName: \"kubernetes.io/projected/8643bd03-327f-4479-8cc2-0be4bb810ea3-kube-api-access-q9qgk\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.668548 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8643bd03-327f-4479-8cc2-0be4bb810ea3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.688616 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.769664 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9595910-bcaa-4041-b18d-ac8a2fd589af-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f9595910-bcaa-4041-b18d-ac8a2fd589af\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.769806 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9595910-bcaa-4041-b18d-ac8a2fd589af-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f9595910-bcaa-4041-b18d-ac8a2fd589af\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.809631 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.809705 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.871608 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9595910-bcaa-4041-b18d-ac8a2fd589af-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f9595910-bcaa-4041-b18d-ac8a2fd589af\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.871657 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9595910-bcaa-4041-b18d-ac8a2fd589af-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f9595910-bcaa-4041-b18d-ac8a2fd589af\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.871753 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9595910-bcaa-4041-b18d-ac8a2fd589af-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f9595910-bcaa-4041-b18d-ac8a2fd589af\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 07:56:45 crc 
kubenswrapper[4806]: I0126 07:56:45.894798 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9595910-bcaa-4041-b18d-ac8a2fd589af-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f9595910-bcaa-4041-b18d-ac8a2fd589af\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.924039 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cd6d8f48-z2srv"] Jan 26 07:56:45 crc kubenswrapper[4806]: I0126 07:56:45.958416 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.212696 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" event={"ID":"e433a9e2-2b5f-4baa-8fa5-683037aa9783","Type":"ContainerStarted","Data":"c27398703ae65f339599c920818dec8904b34fd6f2d22bc6990264449f614693"} Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.212751 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" event={"ID":"e433a9e2-2b5f-4baa-8fa5-683037aa9783","Type":"ContainerStarted","Data":"5c684d84e1c26480b1034314e348151d6c92af71dd7fc47e760f4e00d519a72d"} Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.213878 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.219908 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" event={"ID":"8643bd03-327f-4479-8cc2-0be4bb810ea3","Type":"ContainerDied","Data":"597a8867a7b5f0123f2dddf6bb6a7754cda88032c78ff897c243765746ee8356"} Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.219939 4806 scope.go:117] "RemoveContainer" containerID="4fa1af2b19232f4759e35136e032f9b1a94dbcea18763ed2605d39a8d9ed207d" Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.220048 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc5cc848b-r228k" Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.239059 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" podStartSLOduration=8.239042119 podStartE2EDuration="8.239042119s" podCreationTimestamp="2026-01-26 07:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:46.23872919 +0000 UTC m=+185.503137236" watchObservedRunningTime="2026-01-26 07:56:46.239042119 +0000 UTC m=+185.503450175" Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.239603 4806 patch_prober.go:28] interesting pod/controller-manager-6cd6d8f48-z2srv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" start-of-body= Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.239652 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" podUID="e433a9e2-2b5f-4baa-8fa5-683037aa9783" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: connect: connection refused" Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.241842 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79ddh" event={"ID":"df97f49a-b950-45f2-8c66-52f2c6c33163","Type":"ContainerStarted","Data":"7ae882badc1d1fc9fe36a1400418f7f65b07c4c628603331aba0c7be64238ef2"} Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.252008 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45bpz" event={"ID":"4f544176-9dd8-4416-99f3-53299cd7ffb0","Type":"ContainerStarted","Data":"438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6"} Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.260837 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts6cz" event={"ID":"a078c937-6bed-4604-a0a1-25c9c7d2503d","Type":"ContainerStarted","Data":"763832a6f11cba7a659fb2a5f7e0a288e2a51bb64b901f54acd3ab3d3cadecb5"} Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.266964 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248dw" event={"ID":"422dbc29-ef6c-40a2-8928-ef97946880a0","Type":"ContainerStarted","Data":"0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4"} Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.281960 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cc5cc848b-r228k"] Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.288071 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cc5cc848b-r228k"] Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.327175 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.327280 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-79ddh" podStartSLOduration=3.786411425 podStartE2EDuration="45.327257478s" 
podCreationTimestamp="2026-01-26 07:56:01 +0000 UTC" firstStartedPulling="2026-01-26 07:56:04.257489024 +0000 UTC m=+143.521897080" lastFinishedPulling="2026-01-26 07:56:45.798335077 +0000 UTC m=+185.062743133" observedRunningTime="2026-01-26 07:56:46.303671452 +0000 UTC m=+185.568079508" watchObservedRunningTime="2026-01-26 07:56:46.327257478 +0000 UTC m=+185.591665544" Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.336250 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-45bpz" podStartSLOduration=4.071961602 podStartE2EDuration="44.336229861s" podCreationTimestamp="2026-01-26 07:56:02 +0000 UTC" firstStartedPulling="2026-01-26 07:56:05.401432296 +0000 UTC m=+144.665840352" lastFinishedPulling="2026-01-26 07:56:45.665700555 +0000 UTC m=+184.930108611" observedRunningTime="2026-01-26 07:56:46.3234453 +0000 UTC m=+185.587853356" watchObservedRunningTime="2026-01-26 07:56:46.336229861 +0000 UTC m=+185.600637917" Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.355601 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-248dw" podStartSLOduration=3.208781413 podStartE2EDuration="42.355577217s" podCreationTimestamp="2026-01-26 07:56:04 +0000 UTC" firstStartedPulling="2026-01-26 07:56:06.541031446 +0000 UTC m=+145.805439502" lastFinishedPulling="2026-01-26 07:56:45.687827259 +0000 UTC m=+184.952235306" observedRunningTime="2026-01-26 07:56:46.352992184 +0000 UTC m=+185.617400240" watchObservedRunningTime="2026-01-26 07:56:46.355577217 +0000 UTC m=+185.619985273" Jan 26 07:56:46 crc kubenswrapper[4806]: I0126 07:56:46.375487 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ts6cz" podStartSLOduration=3.991504064 podStartE2EDuration="43.375472098s" podCreationTimestamp="2026-01-26 07:56:03 +0000 UTC" firstStartedPulling="2026-01-26 07:56:06.500728699 +0000 UTC m=+145.765136745" lastFinishedPulling="2026-01-26 07:56:45.884696723 +0000 UTC m=+185.149104779" observedRunningTime="2026-01-26 07:56:46.373118591 +0000 UTC m=+185.637526647" watchObservedRunningTime="2026-01-26 07:56:46.375472098 +0000 UTC m=+185.639880154" Jan 26 07:56:47 crc kubenswrapper[4806]: I0126 07:56:47.049555 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8643bd03-327f-4479-8cc2-0be4bb810ea3" path="/var/lib/kubelet/pods/8643bd03-327f-4479-8cc2-0be4bb810ea3/volumes" Jan 26 07:56:47 crc kubenswrapper[4806]: I0126 07:56:47.273320 4806 generic.go:334] "Generic (PLEG): container finished" podID="f9595910-bcaa-4041-b18d-ac8a2fd589af" containerID="606bbfbeef3ae266148d4d415438ce925288fea081f117c9a5bb37c70a4b33b7" exitCode=0 Jan 26 07:56:47 crc kubenswrapper[4806]: I0126 07:56:47.273365 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f9595910-bcaa-4041-b18d-ac8a2fd589af","Type":"ContainerDied","Data":"606bbfbeef3ae266148d4d415438ce925288fea081f117c9a5bb37c70a4b33b7"} Jan 26 07:56:47 crc kubenswrapper[4806]: I0126 07:56:47.273660 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f9595910-bcaa-4041-b18d-ac8a2fd589af","Type":"ContainerStarted","Data":"5219dacc7ca818a92b47c359259e74b8ddea879ba958ddde3e803fca9963b4b1"} Jan 26 07:56:47 crc kubenswrapper[4806]: I0126 07:56:47.280669 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:48 crc kubenswrapper[4806]: I0126 07:56:48.270564 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 07:56:48 crc kubenswrapper[4806]: I0126 07:56:48.692841 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 07:56:48 crc kubenswrapper[4806]: I0126 07:56:48.840951 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9595910-bcaa-4041-b18d-ac8a2fd589af-kubelet-dir\") pod \"f9595910-bcaa-4041-b18d-ac8a2fd589af\" (UID: \"f9595910-bcaa-4041-b18d-ac8a2fd589af\") " Jan 26 07:56:48 crc kubenswrapper[4806]: I0126 07:56:48.841014 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9595910-bcaa-4041-b18d-ac8a2fd589af-kube-api-access\") pod \"f9595910-bcaa-4041-b18d-ac8a2fd589af\" (UID: \"f9595910-bcaa-4041-b18d-ac8a2fd589af\") " Jan 26 07:56:48 crc kubenswrapper[4806]: I0126 07:56:48.842304 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f9595910-bcaa-4041-b18d-ac8a2fd589af-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f9595910-bcaa-4041-b18d-ac8a2fd589af" (UID: "f9595910-bcaa-4041-b18d-ac8a2fd589af"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:56:48 crc kubenswrapper[4806]: I0126 07:56:48.850847 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9595910-bcaa-4041-b18d-ac8a2fd589af-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f9595910-bcaa-4041-b18d-ac8a2fd589af" (UID: "f9595910-bcaa-4041-b18d-ac8a2fd589af"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:56:48 crc kubenswrapper[4806]: I0126 07:56:48.942386 4806 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f9595910-bcaa-4041-b18d-ac8a2fd589af-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:48 crc kubenswrapper[4806]: I0126 07:56:48.942423 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9595910-bcaa-4041-b18d-ac8a2fd589af-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:49 crc kubenswrapper[4806]: I0126 07:56:49.285357 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 07:56:49 crc kubenswrapper[4806]: I0126 07:56:49.285773 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f9595910-bcaa-4041-b18d-ac8a2fd589af","Type":"ContainerDied","Data":"5219dacc7ca818a92b47c359259e74b8ddea879ba958ddde3e803fca9963b4b1"} Jan 26 07:56:49 crc kubenswrapper[4806]: I0126 07:56:49.285795 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5219dacc7ca818a92b47c359259e74b8ddea879ba958ddde3e803fca9963b4b1" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.004247 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.004859 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.084033 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.352521 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.388452 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 07:56:52 crc kubenswrapper[4806]: E0126 07:56:52.388916 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9595910-bcaa-4041-b18d-ac8a2fd589af" containerName="pruner" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.388993 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9595910-bcaa-4041-b18d-ac8a2fd589af" containerName="pruner" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.389141 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9595910-bcaa-4041-b18d-ac8a2fd589af" containerName="pruner" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.389657 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.393014 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.393914 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.395692 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.395741 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-var-lock\") pod \"installer-9-crc\" (UID: \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.395852 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-kube-api-access\") pod \"installer-9-crc\" (UID: \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.425246 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.497344 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-kube-api-access\") pod \"installer-9-crc\" (UID: \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.497399 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-kubelet-dir\") pod \"installer-9-crc\" (UID: \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.497428 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-var-lock\") pod \"installer-9-crc\" (UID: \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.497522 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-var-lock\") pod \"installer-9-crc\" (UID: \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.497535 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.514608 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-kube-api-access\") pod \"installer-9-crc\" (UID: \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.536558 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.536606 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.586200 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:52 crc kubenswrapper[4806]: I0126 07:56:52.724574 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:56:53 crc kubenswrapper[4806]: I0126 07:56:53.118260 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 07:56:53 crc kubenswrapper[4806]: W0126 07:56:53.123003 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfc2b475a_c1a6_46d8_bbc6_a8a7f5934df1.slice/crio-dd933404c856bc701675ec20780237b4d3553e674c57b77e81f49298bae3b8ac WatchSource:0}: Error finding container dd933404c856bc701675ec20780237b4d3553e674c57b77e81f49298bae3b8ac: Status 404 returned error can't find the container with id dd933404c856bc701675ec20780237b4d3553e674c57b77e81f49298bae3b8ac Jan 26 07:56:53 crc kubenswrapper[4806]: I0126 07:56:53.310999 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1","Type":"ContainerStarted","Data":"dd933404c856bc701675ec20780237b4d3553e674c57b77e81f49298bae3b8ac"} Jan 26 07:56:53 crc kubenswrapper[4806]: I0126 07:56:53.362146 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:54 crc kubenswrapper[4806]: I0126 07:56:54.091920 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:54 crc kubenswrapper[4806]: I0126 07:56:54.092015 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:54 crc kubenswrapper[4806]: I0126 07:56:54.141469 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:54 crc kubenswrapper[4806]: I0126 07:56:54.318154 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1","Type":"ContainerStarted","Data":"ba1df00b886e4f451e117cf26613045ab8106bd35b88b17625275bbe96b1e401"} Jan 26 07:56:54 crc kubenswrapper[4806]: I0126 07:56:54.365874 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:56:54 crc kubenswrapper[4806]: I0126 07:56:54.439505 4806 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:54 crc kubenswrapper[4806]: I0126 07:56:54.439615 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:54 crc kubenswrapper[4806]: I0126 07:56:54.484482 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:55 crc kubenswrapper[4806]: I0126 07:56:55.326574 4806 generic.go:334] "Generic (PLEG): container finished" podID="339dd820-f50a-4135-9da6-5768324b8d55" containerID="efa82dba2cbbfa4848edc7074525dfbff1ac7fb2d3b0c658d997db9c11b0f404" exitCode=0 Jan 26 07:56:55 crc kubenswrapper[4806]: I0126 07:56:55.326672 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqpjn" event={"ID":"339dd820-f50a-4135-9da6-5768324b8d55","Type":"ContainerDied","Data":"efa82dba2cbbfa4848edc7074525dfbff1ac7fb2d3b0c658d997db9c11b0f404"} Jan 26 07:56:55 crc kubenswrapper[4806]: I0126 07:56:55.350405 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.350376136 podStartE2EDuration="3.350376136s" podCreationTimestamp="2026-01-26 07:56:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:56:55.343989286 +0000 UTC m=+194.608397382" watchObservedRunningTime="2026-01-26 07:56:55.350376136 +0000 UTC m=+194.614784242" Jan 26 07:56:55 crc kubenswrapper[4806]: I0126 07:56:55.410298 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:55 crc kubenswrapper[4806]: I0126 07:56:55.722754 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-45bpz"] Jan 26 07:56:55 crc kubenswrapper[4806]: I0126 07:56:55.723061 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-45bpz" podUID="4f544176-9dd8-4416-99f3-53299cd7ffb0" containerName="registry-server" containerID="cri-o://438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6" gracePeriod=2 Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.207136 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.334345 4806 generic.go:334] "Generic (PLEG): container finished" podID="4f544176-9dd8-4416-99f3-53299cd7ffb0" containerID="438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6" exitCode=0 Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.334404 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-45bpz" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.334417 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45bpz" event={"ID":"4f544176-9dd8-4416-99f3-53299cd7ffb0","Type":"ContainerDied","Data":"438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6"} Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.334447 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-45bpz" event={"ID":"4f544176-9dd8-4416-99f3-53299cd7ffb0","Type":"ContainerDied","Data":"dca281a711bb84e93192ca3c5a14f6ceb8da3cf1c87da2ec2ed42bf460164eee"} Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.334466 4806 scope.go:117] "RemoveContainer" containerID="438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.338927 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqpjn" event={"ID":"339dd820-f50a-4135-9da6-5768324b8d55","Type":"ContainerStarted","Data":"bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6"} Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.350648 4806 scope.go:117] "RemoveContainer" containerID="ea666b208fdf78835c540e568f345bd0896ef9b61579eabb512293a19144806f" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.350863 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f544176-9dd8-4416-99f3-53299cd7ffb0-catalog-content\") pod \"4f544176-9dd8-4416-99f3-53299cd7ffb0\" (UID: \"4f544176-9dd8-4416-99f3-53299cd7ffb0\") " Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.350910 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z57jj\" (UniqueName: \"kubernetes.io/projected/4f544176-9dd8-4416-99f3-53299cd7ffb0-kube-api-access-z57jj\") pod \"4f544176-9dd8-4416-99f3-53299cd7ffb0\" (UID: \"4f544176-9dd8-4416-99f3-53299cd7ffb0\") " Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.350944 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f544176-9dd8-4416-99f3-53299cd7ffb0-utilities\") pod \"4f544176-9dd8-4416-99f3-53299cd7ffb0\" (UID: \"4f544176-9dd8-4416-99f3-53299cd7ffb0\") " Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.351858 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f544176-9dd8-4416-99f3-53299cd7ffb0-utilities" (OuterVolumeSpecName: "utilities") pod "4f544176-9dd8-4416-99f3-53299cd7ffb0" (UID: "4f544176-9dd8-4416-99f3-53299cd7ffb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.371246 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f544176-9dd8-4416-99f3-53299cd7ffb0-kube-api-access-z57jj" (OuterVolumeSpecName: "kube-api-access-z57jj") pod "4f544176-9dd8-4416-99f3-53299cd7ffb0" (UID: "4f544176-9dd8-4416-99f3-53299cd7ffb0"). InnerVolumeSpecName "kube-api-access-z57jj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.380764 4806 scope.go:117] "RemoveContainer" containerID="d86193acd0e7effc1f269af0665320de3945ab8568b4c6345b2b8ebbfb878c02" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.382701 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rqpjn" podStartSLOduration=2.924539763 podStartE2EDuration="54.382689299s" podCreationTimestamp="2026-01-26 07:56:02 +0000 UTC" firstStartedPulling="2026-01-26 07:56:04.300647122 +0000 UTC m=+143.565055178" lastFinishedPulling="2026-01-26 07:56:55.758796638 +0000 UTC m=+195.023204714" observedRunningTime="2026-01-26 07:56:56.362284923 +0000 UTC m=+195.626692979" watchObservedRunningTime="2026-01-26 07:56:56.382689299 +0000 UTC m=+195.647097355" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.417003 4806 scope.go:117] "RemoveContainer" containerID="438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6" Jan 26 07:56:56 crc kubenswrapper[4806]: E0126 07:56:56.417416 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6\": container with ID starting with 438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6 not found: ID does not exist" containerID="438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.417442 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6"} err="failed to get container status \"438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6\": rpc error: code = NotFound desc = could not find container \"438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6\": container with ID starting with 438717a9e89fea0b18d9a5b8a7d06e27643b3eb6c398a244e7baf7ac4c5e75f6 not found: ID does not exist" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.417484 4806 scope.go:117] "RemoveContainer" containerID="ea666b208fdf78835c540e568f345bd0896ef9b61579eabb512293a19144806f" Jan 26 07:56:56 crc kubenswrapper[4806]: E0126 07:56:56.417845 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea666b208fdf78835c540e568f345bd0896ef9b61579eabb512293a19144806f\": container with ID starting with ea666b208fdf78835c540e568f345bd0896ef9b61579eabb512293a19144806f not found: ID does not exist" containerID="ea666b208fdf78835c540e568f345bd0896ef9b61579eabb512293a19144806f" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.417865 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea666b208fdf78835c540e568f345bd0896ef9b61579eabb512293a19144806f"} err="failed to get container status \"ea666b208fdf78835c540e568f345bd0896ef9b61579eabb512293a19144806f\": rpc error: code = NotFound desc = could not find container \"ea666b208fdf78835c540e568f345bd0896ef9b61579eabb512293a19144806f\": container with ID starting with ea666b208fdf78835c540e568f345bd0896ef9b61579eabb512293a19144806f not found: ID does not exist" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.417877 4806 scope.go:117] "RemoveContainer" containerID="d86193acd0e7effc1f269af0665320de3945ab8568b4c6345b2b8ebbfb878c02" Jan 26 07:56:56 crc 
kubenswrapper[4806]: E0126 07:56:56.418071 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86193acd0e7effc1f269af0665320de3945ab8568b4c6345b2b8ebbfb878c02\": container with ID starting with d86193acd0e7effc1f269af0665320de3945ab8568b4c6345b2b8ebbfb878c02 not found: ID does not exist" containerID="d86193acd0e7effc1f269af0665320de3945ab8568b4c6345b2b8ebbfb878c02" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.418095 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86193acd0e7effc1f269af0665320de3945ab8568b4c6345b2b8ebbfb878c02"} err="failed to get container status \"d86193acd0e7effc1f269af0665320de3945ab8568b4c6345b2b8ebbfb878c02\": rpc error: code = NotFound desc = could not find container \"d86193acd0e7effc1f269af0665320de3945ab8568b4c6345b2b8ebbfb878c02\": container with ID starting with d86193acd0e7effc1f269af0665320de3945ab8568b4c6345b2b8ebbfb878c02 not found: ID does not exist" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.427719 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f544176-9dd8-4416-99f3-53299cd7ffb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f544176-9dd8-4416-99f3-53299cd7ffb0" (UID: "4f544176-9dd8-4416-99f3-53299cd7ffb0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.452461 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f544176-9dd8-4416-99f3-53299cd7ffb0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.452499 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z57jj\" (UniqueName: \"kubernetes.io/projected/4f544176-9dd8-4416-99f3-53299cd7ffb0-kube-api-access-z57jj\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.452515 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f544176-9dd8-4416-99f3-53299cd7ffb0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.677164 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-45bpz"] Jan 26 07:56:56 crc kubenswrapper[4806]: I0126 07:56:56.680108 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-45bpz"] Jan 26 07:56:57 crc kubenswrapper[4806]: I0126 07:56:57.049501 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f544176-9dd8-4416-99f3-53299cd7ffb0" path="/var/lib/kubelet/pods/4f544176-9dd8-4416-99f3-53299cd7ffb0/volumes" Jan 26 07:56:57 crc kubenswrapper[4806]: I0126 07:56:57.120480 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-248dw"] Jan 26 07:56:57 crc kubenswrapper[4806]: I0126 07:56:57.347445 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmd7h" event={"ID":"cf65c110-74ee-4e0c-a7e8-bb27c891ff12","Type":"ContainerStarted","Data":"4223d58b07c8a779337c90d9019006f7bdb82f10b5281163b174ce1a8eb27de0"} Jan 26 07:56:57 crc kubenswrapper[4806]: I0126 07:56:57.351467 4806 generic.go:334] "Generic (PLEG): container finished" podID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" 
containerID="380b5ecf0eb94b7d73db31d33491c08b08b39c28920e47913488d09cd109854b" exitCode=0 Jan 26 07:56:57 crc kubenswrapper[4806]: I0126 07:56:57.351649 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq7zd" event={"ID":"0531f954-d1d9-42f0-bd29-f8ff5b0871b4","Type":"ContainerDied","Data":"380b5ecf0eb94b7d73db31d33491c08b08b39c28920e47913488d09cd109854b"} Jan 26 07:56:57 crc kubenswrapper[4806]: I0126 07:56:57.351773 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-248dw" podUID="422dbc29-ef6c-40a2-8928-ef97946880a0" containerName="registry-server" containerID="cri-o://0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4" gracePeriod=2 Jan 26 07:56:57 crc kubenswrapper[4806]: I0126 07:56:57.853359 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:57 crc kubenswrapper[4806]: I0126 07:56:57.979015 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg5pf\" (UniqueName: \"kubernetes.io/projected/422dbc29-ef6c-40a2-8928-ef97946880a0-kube-api-access-fg5pf\") pod \"422dbc29-ef6c-40a2-8928-ef97946880a0\" (UID: \"422dbc29-ef6c-40a2-8928-ef97946880a0\") " Jan 26 07:56:57 crc kubenswrapper[4806]: I0126 07:56:57.979109 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422dbc29-ef6c-40a2-8928-ef97946880a0-utilities\") pod \"422dbc29-ef6c-40a2-8928-ef97946880a0\" (UID: \"422dbc29-ef6c-40a2-8928-ef97946880a0\") " Jan 26 07:56:57 crc kubenswrapper[4806]: I0126 07:56:57.979197 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422dbc29-ef6c-40a2-8928-ef97946880a0-catalog-content\") pod \"422dbc29-ef6c-40a2-8928-ef97946880a0\" (UID: \"422dbc29-ef6c-40a2-8928-ef97946880a0\") " Jan 26 07:56:57 crc kubenswrapper[4806]: I0126 07:56:57.981237 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/422dbc29-ef6c-40a2-8928-ef97946880a0-utilities" (OuterVolumeSpecName: "utilities") pod "422dbc29-ef6c-40a2-8928-ef97946880a0" (UID: "422dbc29-ef6c-40a2-8928-ef97946880a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:56:57 crc kubenswrapper[4806]: I0126 07:56:57.994646 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422dbc29-ef6c-40a2-8928-ef97946880a0-kube-api-access-fg5pf" (OuterVolumeSpecName: "kube-api-access-fg5pf") pod "422dbc29-ef6c-40a2-8928-ef97946880a0" (UID: "422dbc29-ef6c-40a2-8928-ef97946880a0"). InnerVolumeSpecName "kube-api-access-fg5pf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.011631 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/422dbc29-ef6c-40a2-8928-ef97946880a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "422dbc29-ef6c-40a2-8928-ef97946880a0" (UID: "422dbc29-ef6c-40a2-8928-ef97946880a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.081218 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg5pf\" (UniqueName: \"kubernetes.io/projected/422dbc29-ef6c-40a2-8928-ef97946880a0-kube-api-access-fg5pf\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.081247 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/422dbc29-ef6c-40a2-8928-ef97946880a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.081257 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/422dbc29-ef6c-40a2-8928-ef97946880a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.118338 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cd6d8f48-z2srv"] Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.118542 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" podUID="e433a9e2-2b5f-4baa-8fa5-683037aa9783" containerName="controller-manager" containerID="cri-o://c27398703ae65f339599c920818dec8904b34fd6f2d22bc6990264449f614693" gracePeriod=30 Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.161263 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs"] Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.161672 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" podUID="b4a4dbaf-e0fb-4f3b-8b03-709e232a265e" containerName="route-controller-manager" containerID="cri-o://4fbaed5b3a6f82a5584220127613531194bb11d92b23d5b980160665725c56e2" gracePeriod=30 Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.359693 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p4nt" event={"ID":"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3","Type":"ContainerStarted","Data":"ad4da5d9b0ba3adfe9a8c76216b1731a8aecddf2c5a6c649032f015d085c7f78"} Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.363004 4806 generic.go:334] "Generic (PLEG): container finished" podID="b4a4dbaf-e0fb-4f3b-8b03-709e232a265e" containerID="4fbaed5b3a6f82a5584220127613531194bb11d92b23d5b980160665725c56e2" exitCode=0 Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.363138 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" event={"ID":"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e","Type":"ContainerDied","Data":"4fbaed5b3a6f82a5584220127613531194bb11d92b23d5b980160665725c56e2"} Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.369875 4806 generic.go:334] "Generic (PLEG): container finished" podID="422dbc29-ef6c-40a2-8928-ef97946880a0" containerID="0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4" exitCode=0 Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.369973 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248dw" 
event={"ID":"422dbc29-ef6c-40a2-8928-ef97946880a0","Type":"ContainerDied","Data":"0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4"} Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.370009 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-248dw" event={"ID":"422dbc29-ef6c-40a2-8928-ef97946880a0","Type":"ContainerDied","Data":"fee2b079c0103ea3f5414c42770bd099723c4b95a30174354d8e98a1a8a26c25"} Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.370030 4806 scope.go:117] "RemoveContainer" containerID="0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.369940 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-248dw" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.382623 4806 generic.go:334] "Generic (PLEG): container finished" podID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" containerID="4223d58b07c8a779337c90d9019006f7bdb82f10b5281163b174ce1a8eb27de0" exitCode=0 Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.382747 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmd7h" event={"ID":"cf65c110-74ee-4e0c-a7e8-bb27c891ff12","Type":"ContainerDied","Data":"4223d58b07c8a779337c90d9019006f7bdb82f10b5281163b174ce1a8eb27de0"} Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.384614 4806 scope.go:117] "RemoveContainer" containerID="d5c2815870c53f05ed7684ee87098fe78f03dbeea9ed293f80ff9cae1caa396e" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.391691 4806 generic.go:334] "Generic (PLEG): container finished" podID="e433a9e2-2b5f-4baa-8fa5-683037aa9783" containerID="c27398703ae65f339599c920818dec8904b34fd6f2d22bc6990264449f614693" exitCode=0 Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.391779 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" event={"ID":"e433a9e2-2b5f-4baa-8fa5-683037aa9783","Type":"ContainerDied","Data":"c27398703ae65f339599c920818dec8904b34fd6f2d22bc6990264449f614693"} Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.394506 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq7zd" event={"ID":"0531f954-d1d9-42f0-bd29-f8ff5b0871b4","Type":"ContainerStarted","Data":"1082596f16b184b7a4358a615449d25564638f2cb9a44bbc2084ed2e6fe2e0d2"} Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.427277 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-248dw"] Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.427723 4806 scope.go:117] "RemoveContainer" containerID="a5fe96f8df2064ccfa5a3c2c1aba93b78afc2716872ce4d0a6e98c71f7cdee2a" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.429983 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-248dw"] Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.465210 4806 scope.go:117] "RemoveContainer" containerID="0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4" Jan 26 07:56:58 crc kubenswrapper[4806]: E0126 07:56:58.465974 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4\": container with ID starting with 
0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4 not found: ID does not exist" containerID="0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.466034 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4"} err="failed to get container status \"0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4\": rpc error: code = NotFound desc = could not find container \"0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4\": container with ID starting with 0481914640d34a30f4d695147046551983070423ed700484307c1be9e7be64a4 not found: ID does not exist" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.466070 4806 scope.go:117] "RemoveContainer" containerID="d5c2815870c53f05ed7684ee87098fe78f03dbeea9ed293f80ff9cae1caa396e" Jan 26 07:56:58 crc kubenswrapper[4806]: E0126 07:56:58.466627 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5c2815870c53f05ed7684ee87098fe78f03dbeea9ed293f80ff9cae1caa396e\": container with ID starting with d5c2815870c53f05ed7684ee87098fe78f03dbeea9ed293f80ff9cae1caa396e not found: ID does not exist" containerID="d5c2815870c53f05ed7684ee87098fe78f03dbeea9ed293f80ff9cae1caa396e" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.466650 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5c2815870c53f05ed7684ee87098fe78f03dbeea9ed293f80ff9cae1caa396e"} err="failed to get container status \"d5c2815870c53f05ed7684ee87098fe78f03dbeea9ed293f80ff9cae1caa396e\": rpc error: code = NotFound desc = could not find container \"d5c2815870c53f05ed7684ee87098fe78f03dbeea9ed293f80ff9cae1caa396e\": container with ID starting with d5c2815870c53f05ed7684ee87098fe78f03dbeea9ed293f80ff9cae1caa396e not found: ID does not exist" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.466668 4806 scope.go:117] "RemoveContainer" containerID="a5fe96f8df2064ccfa5a3c2c1aba93b78afc2716872ce4d0a6e98c71f7cdee2a" Jan 26 07:56:58 crc kubenswrapper[4806]: E0126 07:56:58.466917 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5fe96f8df2064ccfa5a3c2c1aba93b78afc2716872ce4d0a6e98c71f7cdee2a\": container with ID starting with a5fe96f8df2064ccfa5a3c2c1aba93b78afc2716872ce4d0a6e98c71f7cdee2a not found: ID does not exist" containerID="a5fe96f8df2064ccfa5a3c2c1aba93b78afc2716872ce4d0a6e98c71f7cdee2a" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.466943 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5fe96f8df2064ccfa5a3c2c1aba93b78afc2716872ce4d0a6e98c71f7cdee2a"} err="failed to get container status \"a5fe96f8df2064ccfa5a3c2c1aba93b78afc2716872ce4d0a6e98c71f7cdee2a\": rpc error: code = NotFound desc = could not find container \"a5fe96f8df2064ccfa5a3c2c1aba93b78afc2716872ce4d0a6e98c71f7cdee2a\": container with ID starting with a5fe96f8df2064ccfa5a3c2c1aba93b78afc2716872ce4d0a6e98c71f7cdee2a not found: ID does not exist" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.590504 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.605357 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bq7zd" podStartSLOduration=5.296942693 podStartE2EDuration="57.605336158s" podCreationTimestamp="2026-01-26 07:56:01 +0000 UTC" firstStartedPulling="2026-01-26 07:56:05.433664415 +0000 UTC m=+144.698072471" lastFinishedPulling="2026-01-26 07:56:57.74205788 +0000 UTC m=+197.006465936" observedRunningTime="2026-01-26 07:56:58.442040518 +0000 UTC m=+197.706448574" watchObservedRunningTime="2026-01-26 07:56:58.605336158 +0000 UTC m=+197.869744214" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.691904 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-serving-cert\") pod \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.692026 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-config\") pod \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.692071 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gsx9\" (UniqueName: \"kubernetes.io/projected/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-kube-api-access-8gsx9\") pod \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.692100 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-client-ca\") pod \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\" (UID: \"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e\") " Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.693057 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-client-ca" (OuterVolumeSpecName: "client-ca") pod "b4a4dbaf-e0fb-4f3b-8b03-709e232a265e" (UID: "b4a4dbaf-e0fb-4f3b-8b03-709e232a265e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.694488 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-config" (OuterVolumeSpecName: "config") pod "b4a4dbaf-e0fb-4f3b-8b03-709e232a265e" (UID: "b4a4dbaf-e0fb-4f3b-8b03-709e232a265e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.699734 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-kube-api-access-8gsx9" (OuterVolumeSpecName: "kube-api-access-8gsx9") pod "b4a4dbaf-e0fb-4f3b-8b03-709e232a265e" (UID: "b4a4dbaf-e0fb-4f3b-8b03-709e232a265e"). InnerVolumeSpecName "kube-api-access-8gsx9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.699745 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b4a4dbaf-e0fb-4f3b-8b03-709e232a265e" (UID: "b4a4dbaf-e0fb-4f3b-8b03-709e232a265e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.793235 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.793283 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.793296 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.793310 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gsx9\" (UniqueName: \"kubernetes.io/projected/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e-kube-api-access-8gsx9\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.830589 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.995166 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-client-ca\") pod \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.995503 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78lvf\" (UniqueName: \"kubernetes.io/projected/e433a9e2-2b5f-4baa-8fa5-683037aa9783-kube-api-access-78lvf\") pod \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.995548 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-config\") pod \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.995582 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-proxy-ca-bundles\") pod \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.995645 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e433a9e2-2b5f-4baa-8fa5-683037aa9783-serving-cert\") pod \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\" (UID: \"e433a9e2-2b5f-4baa-8fa5-683037aa9783\") " Jan 26 07:56:58 crc kubenswrapper[4806]: 
I0126 07:56:58.996119 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-client-ca" (OuterVolumeSpecName: "client-ca") pod "e433a9e2-2b5f-4baa-8fa5-683037aa9783" (UID: "e433a9e2-2b5f-4baa-8fa5-683037aa9783"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.996288 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e433a9e2-2b5f-4baa-8fa5-683037aa9783" (UID: "e433a9e2-2b5f-4baa-8fa5-683037aa9783"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.996325 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-config" (OuterVolumeSpecName: "config") pod "e433a9e2-2b5f-4baa-8fa5-683037aa9783" (UID: "e433a9e2-2b5f-4baa-8fa5-683037aa9783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:56:58 crc kubenswrapper[4806]: I0126 07:56:58.998849 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e433a9e2-2b5f-4baa-8fa5-683037aa9783-kube-api-access-78lvf" (OuterVolumeSpecName: "kube-api-access-78lvf") pod "e433a9e2-2b5f-4baa-8fa5-683037aa9783" (UID: "e433a9e2-2b5f-4baa-8fa5-683037aa9783"). InnerVolumeSpecName "kube-api-access-78lvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.000694 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e433a9e2-2b5f-4baa-8fa5-683037aa9783-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e433a9e2-2b5f-4baa-8fa5-683037aa9783" (UID: "e433a9e2-2b5f-4baa-8fa5-683037aa9783"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.053760 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422dbc29-ef6c-40a2-8928-ef97946880a0" path="/var/lib/kubelet/pods/422dbc29-ef6c-40a2-8928-ef97946880a0/volumes" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.097429 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e433a9e2-2b5f-4baa-8fa5-683037aa9783-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.097470 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.097482 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78lvf\" (UniqueName: \"kubernetes.io/projected/e433a9e2-2b5f-4baa-8fa5-683037aa9783-kube-api-access-78lvf\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.097496 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.097508 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e433a9e2-2b5f-4baa-8fa5-683037aa9783-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.403000 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.403011 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cd6d8f48-z2srv" event={"ID":"e433a9e2-2b5f-4baa-8fa5-683037aa9783","Type":"ContainerDied","Data":"5c684d84e1c26480b1034314e348151d6c92af71dd7fc47e760f4e00d519a72d"} Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.403064 4806 scope.go:117] "RemoveContainer" containerID="c27398703ae65f339599c920818dec8904b34fd6f2d22bc6990264449f614693" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.405812 4806 generic.go:334] "Generic (PLEG): container finished" podID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" containerID="ad4da5d9b0ba3adfe9a8c76216b1731a8aecddf2c5a6c649032f015d085c7f78" exitCode=0 Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.405885 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p4nt" event={"ID":"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3","Type":"ContainerDied","Data":"ad4da5d9b0ba3adfe9a8c76216b1731a8aecddf2c5a6c649032f015d085c7f78"} Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.414970 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" event={"ID":"b4a4dbaf-e0fb-4f3b-8b03-709e232a265e","Type":"ContainerDied","Data":"c705e8d342445044ff6ee69aa553c98405965943ebea9a16ee63f074c5d675d5"} Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.415027 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.418402 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmd7h" event={"ID":"cf65c110-74ee-4e0c-a7e8-bb27c891ff12","Type":"ContainerStarted","Data":"7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4"} Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.426649 4806 scope.go:117] "RemoveContainer" containerID="4fbaed5b3a6f82a5584220127613531194bb11d92b23d5b980160665725c56e2" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.448687 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cd6d8f48-z2srv"] Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.452148 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cd6d8f48-z2srv"] Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.462792 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs"] Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.467787 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cccfc774d-2hfxs"] Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483201 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-96f6d79dc-fwc4q"] Jan 26 07:56:59 crc kubenswrapper[4806]: E0126 07:56:59.483551 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f544176-9dd8-4416-99f3-53299cd7ffb0" containerName="registry-server" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483570 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f544176-9dd8-4416-99f3-53299cd7ffb0" containerName="registry-server" Jan 26 07:56:59 crc kubenswrapper[4806]: E0126 07:56:59.483590 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e433a9e2-2b5f-4baa-8fa5-683037aa9783" containerName="controller-manager" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483597 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e433a9e2-2b5f-4baa-8fa5-683037aa9783" containerName="controller-manager" Jan 26 07:56:59 crc kubenswrapper[4806]: E0126 07:56:59.483605 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422dbc29-ef6c-40a2-8928-ef97946880a0" containerName="registry-server" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483611 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="422dbc29-ef6c-40a2-8928-ef97946880a0" containerName="registry-server" Jan 26 07:56:59 crc kubenswrapper[4806]: E0126 07:56:59.483618 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f544176-9dd8-4416-99f3-53299cd7ffb0" containerName="extract-utilities" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483624 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f544176-9dd8-4416-99f3-53299cd7ffb0" containerName="extract-utilities" Jan 26 07:56:59 crc kubenswrapper[4806]: E0126 07:56:59.483636 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422dbc29-ef6c-40a2-8928-ef97946880a0" containerName="extract-utilities" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483644 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="422dbc29-ef6c-40a2-8928-ef97946880a0" 
containerName="extract-utilities" Jan 26 07:56:59 crc kubenswrapper[4806]: E0126 07:56:59.483653 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422dbc29-ef6c-40a2-8928-ef97946880a0" containerName="extract-content" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483658 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="422dbc29-ef6c-40a2-8928-ef97946880a0" containerName="extract-content" Jan 26 07:56:59 crc kubenswrapper[4806]: E0126 07:56:59.483674 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f544176-9dd8-4416-99f3-53299cd7ffb0" containerName="extract-content" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483680 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f544176-9dd8-4416-99f3-53299cd7ffb0" containerName="extract-content" Jan 26 07:56:59 crc kubenswrapper[4806]: E0126 07:56:59.483691 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4a4dbaf-e0fb-4f3b-8b03-709e232a265e" containerName="route-controller-manager" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483697 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4a4dbaf-e0fb-4f3b-8b03-709e232a265e" containerName="route-controller-manager" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483798 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4a4dbaf-e0fb-4f3b-8b03-709e232a265e" containerName="route-controller-manager" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483810 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="422dbc29-ef6c-40a2-8928-ef97946880a0" containerName="registry-server" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483820 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f544176-9dd8-4416-99f3-53299cd7ffb0" containerName="registry-server" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.483831 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e433a9e2-2b5f-4baa-8fa5-683037aa9783" containerName="controller-manager" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.484300 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.488091 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.489898 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.490151 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.490795 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.490967 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.491120 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.492429 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cmd7h" podStartSLOduration=2.240090724 podStartE2EDuration="54.492404762s" podCreationTimestamp="2026-01-26 07:56:05 +0000 UTC" firstStartedPulling="2026-01-26 07:56:06.546576272 +0000 UTC m=+145.810984328" lastFinishedPulling="2026-01-26 07:56:58.7988903 +0000 UTC m=+198.063298366" observedRunningTime="2026-01-26 07:56:59.478764719 +0000 UTC m=+198.743172775" watchObservedRunningTime="2026-01-26 07:56:59.492404762 +0000 UTC m=+198.756812818" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.492659 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx"] Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.493877 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.494256 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.498752 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.498891 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.499043 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.499209 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.499363 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.503109 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.513900 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-96f6d79dc-fwc4q"] Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.516567 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx"] Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.603889 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-client-ca\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.603945 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zggrn\" (UniqueName: \"kubernetes.io/projected/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-kube-api-access-zggrn\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.604004 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8e9db2-4f5a-4499-962a-c8f784f509c6-client-ca\") pod \"route-controller-manager-677859bc77-4cmtx\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.604029 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-serving-cert\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " 
pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.604064 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8e9db2-4f5a-4499-962a-c8f784f509c6-config\") pod \"route-controller-manager-677859bc77-4cmtx\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.604081 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-proxy-ca-bundles\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.604099 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxfvg\" (UniqueName: \"kubernetes.io/projected/4a8e9db2-4f5a-4499-962a-c8f784f509c6-kube-api-access-kxfvg\") pod \"route-controller-manager-677859bc77-4cmtx\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.604198 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-config\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.604238 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8e9db2-4f5a-4499-962a-c8f784f509c6-serving-cert\") pod \"route-controller-manager-677859bc77-4cmtx\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.705764 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zggrn\" (UniqueName: \"kubernetes.io/projected/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-kube-api-access-zggrn\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.706068 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8e9db2-4f5a-4499-962a-c8f784f509c6-client-ca\") pod \"route-controller-manager-677859bc77-4cmtx\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.706173 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-serving-cert\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " 
pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.706286 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8e9db2-4f5a-4499-962a-c8f784f509c6-config\") pod \"route-controller-manager-677859bc77-4cmtx\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.706365 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-proxy-ca-bundles\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.706440 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxfvg\" (UniqueName: \"kubernetes.io/projected/4a8e9db2-4f5a-4499-962a-c8f784f509c6-kube-api-access-kxfvg\") pod \"route-controller-manager-677859bc77-4cmtx\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.706525 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-config\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.706667 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8e9db2-4f5a-4499-962a-c8f784f509c6-serving-cert\") pod \"route-controller-manager-677859bc77-4cmtx\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.706780 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-client-ca\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.707314 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8e9db2-4f5a-4499-962a-c8f784f509c6-client-ca\") pod \"route-controller-manager-677859bc77-4cmtx\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.707463 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8e9db2-4f5a-4499-962a-c8f784f509c6-config\") pod \"route-controller-manager-677859bc77-4cmtx\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.707854 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-client-ca\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.708288 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-config\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.708573 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-proxy-ca-bundles\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.710756 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-serving-cert\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.710767 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8e9db2-4f5a-4499-962a-c8f784f509c6-serving-cert\") pod \"route-controller-manager-677859bc77-4cmtx\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.732043 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxfvg\" (UniqueName: \"kubernetes.io/projected/4a8e9db2-4f5a-4499-962a-c8f784f509c6-kube-api-access-kxfvg\") pod \"route-controller-manager-677859bc77-4cmtx\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.733130 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zggrn\" (UniqueName: \"kubernetes.io/projected/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-kube-api-access-zggrn\") pod \"controller-manager-96f6d79dc-fwc4q\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.799354 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:56:59 crc kubenswrapper[4806]: I0126 07:56:59.810046 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.068614 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx"] Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.205615 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-96f6d79dc-fwc4q"] Jan 26 07:57:00 crc kubenswrapper[4806]: W0126 07:57:00.210636 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod217de6b0_bcc1_40a2_b9bd_b61ebd6b3d5f.slice/crio-f149206e66ee1b7ab3aba2019914caa681ef5be5d69cab20f97dbd29acd5158a WatchSource:0}: Error finding container f149206e66ee1b7ab3aba2019914caa681ef5be5d69cab20f97dbd29acd5158a: Status 404 returned error can't find the container with id f149206e66ee1b7ab3aba2019914caa681ef5be5d69cab20f97dbd29acd5158a Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.425544 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" event={"ID":"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f","Type":"ContainerStarted","Data":"4455fe9c76c39e494a6709b203e057e65fa8c2992c38b7a9846cbaf1bd320613"} Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.425602 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" event={"ID":"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f","Type":"ContainerStarted","Data":"f149206e66ee1b7ab3aba2019914caa681ef5be5d69cab20f97dbd29acd5158a"} Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.425719 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.427620 4806 patch_prober.go:28] interesting pod/controller-manager-96f6d79dc-fwc4q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.427685 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" podUID="217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.429324 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" event={"ID":"4a8e9db2-4f5a-4499-962a-c8f784f509c6","Type":"ContainerStarted","Data":"8628bc1daff07c10fb3ccb6936c4281a5090982cb2de289fe349d61bd65b32b9"} Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.429364 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" event={"ID":"4a8e9db2-4f5a-4499-962a-c8f784f509c6","Type":"ContainerStarted","Data":"184f4e14b3b8b9b68afb64d9af7e05fddf7a068994be23d8d16c1cb7e7c4f090"} Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.429510 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.432256 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p4nt" event={"ID":"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3","Type":"ContainerStarted","Data":"3e86adb9182afa3b208548f2d08bd7436c4a16dd25644a95bcbb36aaf3d3e9d1"} Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.452549 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" podStartSLOduration=2.4525143529999998 podStartE2EDuration="2.452514353s" podCreationTimestamp="2026-01-26 07:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:57:00.452279366 +0000 UTC m=+199.716687432" watchObservedRunningTime="2026-01-26 07:57:00.452514353 +0000 UTC m=+199.716922409" Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.503669 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" podStartSLOduration=2.5036439169999998 podStartE2EDuration="2.503643917s" podCreationTimestamp="2026-01-26 07:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:57:00.481259291 +0000 UTC m=+199.745667347" watchObservedRunningTime="2026-01-26 07:57:00.503643917 +0000 UTC m=+199.768051973" Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.866885 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:57:00 crc kubenswrapper[4806]: I0126 07:57:00.885780 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5p4nt" podStartSLOduration=4.527586063 podStartE2EDuration="56.885759917s" podCreationTimestamp="2026-01-26 07:56:04 +0000 UTC" firstStartedPulling="2026-01-26 07:56:07.581042215 +0000 UTC m=+146.845450271" lastFinishedPulling="2026-01-26 07:56:59.939216069 +0000 UTC m=+199.203624125" observedRunningTime="2026-01-26 07:57:00.503116701 +0000 UTC m=+199.767524757" watchObservedRunningTime="2026-01-26 07:57:00.885759917 +0000 UTC m=+200.150167973" Jan 26 07:57:01 crc kubenswrapper[4806]: I0126 07:57:01.047908 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a4dbaf-e0fb-4f3b-8b03-709e232a265e" path="/var/lib/kubelet/pods/b4a4dbaf-e0fb-4f3b-8b03-709e232a265e/volumes" Jan 26 07:57:01 crc kubenswrapper[4806]: I0126 07:57:01.048593 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e433a9e2-2b5f-4baa-8fa5-683037aa9783" path="/var/lib/kubelet/pods/e433a9e2-2b5f-4baa-8fa5-683037aa9783/volumes" Jan 26 07:57:01 crc kubenswrapper[4806]: I0126 07:57:01.446137 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:57:02 crc kubenswrapper[4806]: I0126 07:57:02.393925 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:57:02 crc kubenswrapper[4806]: I0126 07:57:02.394146 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bq7zd" 
Jan 26 07:57:02 crc kubenswrapper[4806]: I0126 07:57:02.436015 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:57:02 crc kubenswrapper[4806]: I0126 07:57:02.697494 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:57:02 crc kubenswrapper[4806]: I0126 07:57:02.697562 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:57:02 crc kubenswrapper[4806]: I0126 07:57:02.736802 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:57:02 crc kubenswrapper[4806]: I0126 07:57:02.792938 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:57:03 crc kubenswrapper[4806]: I0126 07:57:03.484126 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:57:05 crc kubenswrapper[4806]: I0126 07:57:05.245998 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:57:05 crc kubenswrapper[4806]: I0126 07:57:05.246455 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:57:05 crc kubenswrapper[4806]: I0126 07:57:05.321712 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rqpjn"] Jan 26 07:57:05 crc kubenswrapper[4806]: I0126 07:57:05.457958 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rqpjn" podUID="339dd820-f50a-4135-9da6-5768324b8d55" containerName="registry-server" containerID="cri-o://bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6" gracePeriod=2 Jan 26 07:57:05 crc kubenswrapper[4806]: I0126 07:57:05.663953 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:57:05 crc kubenswrapper[4806]: I0126 07:57:05.664012 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:57:05 crc kubenswrapper[4806]: I0126 07:57:05.713996 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:57:05 crc kubenswrapper[4806]: I0126 07:57:05.955543 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:57:05 crc kubenswrapper[4806]: I0126 07:57:05.991064 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/339dd820-f50a-4135-9da6-5768324b8d55-utilities\") pod \"339dd820-f50a-4135-9da6-5768324b8d55\" (UID: \"339dd820-f50a-4135-9da6-5768324b8d55\") " Jan 26 07:57:05 crc kubenswrapper[4806]: I0126 07:57:05.991277 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6kg2\" (UniqueName: \"kubernetes.io/projected/339dd820-f50a-4135-9da6-5768324b8d55-kube-api-access-m6kg2\") pod \"339dd820-f50a-4135-9da6-5768324b8d55\" (UID: \"339dd820-f50a-4135-9da6-5768324b8d55\") " Jan 26 07:57:05 crc kubenswrapper[4806]: I0126 07:57:05.991315 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/339dd820-f50a-4135-9da6-5768324b8d55-catalog-content\") pod \"339dd820-f50a-4135-9da6-5768324b8d55\" (UID: \"339dd820-f50a-4135-9da6-5768324b8d55\") " Jan 26 07:57:05 crc kubenswrapper[4806]: I0126 07:57:05.991996 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339dd820-f50a-4135-9da6-5768324b8d55-utilities" (OuterVolumeSpecName: "utilities") pod "339dd820-f50a-4135-9da6-5768324b8d55" (UID: "339dd820-f50a-4135-9da6-5768324b8d55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.004829 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/339dd820-f50a-4135-9da6-5768324b8d55-kube-api-access-m6kg2" (OuterVolumeSpecName: "kube-api-access-m6kg2") pod "339dd820-f50a-4135-9da6-5768324b8d55" (UID: "339dd820-f50a-4135-9da6-5768324b8d55"). InnerVolumeSpecName "kube-api-access-m6kg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.043279 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/339dd820-f50a-4135-9da6-5768324b8d55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "339dd820-f50a-4135-9da6-5768324b8d55" (UID: "339dd820-f50a-4135-9da6-5768324b8d55"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.093258 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6kg2\" (UniqueName: \"kubernetes.io/projected/339dd820-f50a-4135-9da6-5768324b8d55-kube-api-access-m6kg2\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.093298 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/339dd820-f50a-4135-9da6-5768324b8d55-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.093309 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/339dd820-f50a-4135-9da6-5768324b8d55-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.293483 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5p4nt" podUID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" containerName="registry-server" probeResult="failure" output=< Jan 26 07:57:06 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 07:57:06 crc kubenswrapper[4806]: > Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.467019 4806 generic.go:334] "Generic (PLEG): container finished" podID="339dd820-f50a-4135-9da6-5768324b8d55" containerID="bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6" exitCode=0 Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.467107 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rqpjn" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.467115 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqpjn" event={"ID":"339dd820-f50a-4135-9da6-5768324b8d55","Type":"ContainerDied","Data":"bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6"} Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.467181 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rqpjn" event={"ID":"339dd820-f50a-4135-9da6-5768324b8d55","Type":"ContainerDied","Data":"99fa590e83629c40ca425dee37063742d504d77c790580d693abc93bba343b20"} Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.467205 4806 scope.go:117] "RemoveContainer" containerID="bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.493469 4806 scope.go:117] "RemoveContainer" containerID="efa82dba2cbbfa4848edc7074525dfbff1ac7fb2d3b0c658d997db9c11b0f404" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.522901 4806 scope.go:117] "RemoveContainer" containerID="fbc84a449235680ffbcf290fcf3911cf270d2ce4e3f604e218451c30518dcbcf" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.524870 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rqpjn"] Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.540318 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rqpjn"] Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.540625 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.563750 4806 scope.go:117] "RemoveContainer" 
containerID="bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6" Jan 26 07:57:06 crc kubenswrapper[4806]: E0126 07:57:06.565886 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6\": container with ID starting with bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6 not found: ID does not exist" containerID="bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.565976 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6"} err="failed to get container status \"bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6\": rpc error: code = NotFound desc = could not find container \"bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6\": container with ID starting with bc32503594d93ff0f46e8dc878322f959aeda539088924f258bbad66d728e0f6 not found: ID does not exist" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.566018 4806 scope.go:117] "RemoveContainer" containerID="efa82dba2cbbfa4848edc7074525dfbff1ac7fb2d3b0c658d997db9c11b0f404" Jan 26 07:57:06 crc kubenswrapper[4806]: E0126 07:57:06.566390 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efa82dba2cbbfa4848edc7074525dfbff1ac7fb2d3b0c658d997db9c11b0f404\": container with ID starting with efa82dba2cbbfa4848edc7074525dfbff1ac7fb2d3b0c658d997db9c11b0f404 not found: ID does not exist" containerID="efa82dba2cbbfa4848edc7074525dfbff1ac7fb2d3b0c658d997db9c11b0f404" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.566416 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efa82dba2cbbfa4848edc7074525dfbff1ac7fb2d3b0c658d997db9c11b0f404"} err="failed to get container status \"efa82dba2cbbfa4848edc7074525dfbff1ac7fb2d3b0c658d997db9c11b0f404\": rpc error: code = NotFound desc = could not find container \"efa82dba2cbbfa4848edc7074525dfbff1ac7fb2d3b0c658d997db9c11b0f404\": container with ID starting with efa82dba2cbbfa4848edc7074525dfbff1ac7fb2d3b0c658d997db9c11b0f404 not found: ID does not exist" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.566501 4806 scope.go:117] "RemoveContainer" containerID="fbc84a449235680ffbcf290fcf3911cf270d2ce4e3f604e218451c30518dcbcf" Jan 26 07:57:06 crc kubenswrapper[4806]: E0126 07:57:06.567112 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc84a449235680ffbcf290fcf3911cf270d2ce4e3f604e218451c30518dcbcf\": container with ID starting with fbc84a449235680ffbcf290fcf3911cf270d2ce4e3f604e218451c30518dcbcf not found: ID does not exist" containerID="fbc84a449235680ffbcf290fcf3911cf270d2ce4e3f604e218451c30518dcbcf" Jan 26 07:57:06 crc kubenswrapper[4806]: I0126 07:57:06.567235 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc84a449235680ffbcf290fcf3911cf270d2ce4e3f604e218451c30518dcbcf"} err="failed to get container status \"fbc84a449235680ffbcf290fcf3911cf270d2ce4e3f604e218451c30518dcbcf\": rpc error: code = NotFound desc = could not find container \"fbc84a449235680ffbcf290fcf3911cf270d2ce4e3f604e218451c30518dcbcf\": container with ID starting with 
fbc84a449235680ffbcf290fcf3911cf270d2ce4e3f604e218451c30518dcbcf not found: ID does not exist" Jan 26 07:57:07 crc kubenswrapper[4806]: I0126 07:57:07.055748 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="339dd820-f50a-4135-9da6-5768324b8d55" path="/var/lib/kubelet/pods/339dd820-f50a-4135-9da6-5768324b8d55/volumes" Jan 26 07:57:07 crc kubenswrapper[4806]: I0126 07:57:07.723445 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cmd7h"] Jan 26 07:57:08 crc kubenswrapper[4806]: I0126 07:57:08.238072 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" podUID="d66e251a-5a67-45c4-be63-2f46b56df1a5" containerName="oauth-openshift" containerID="cri-o://39d3ccbff1a26b8ae79eeb17cd893cdf52835f2ffeb0b20d0b5955a00b09d66d" gracePeriod=15 Jan 26 07:57:08 crc kubenswrapper[4806]: I0126 07:57:08.478673 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cmd7h" podUID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" containerName="registry-server" containerID="cri-o://7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4" gracePeriod=2 Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.489965 4806 generic.go:334] "Generic (PLEG): container finished" podID="d66e251a-5a67-45c4-be63-2f46b56df1a5" containerID="39d3ccbff1a26b8ae79eeb17cd893cdf52835f2ffeb0b20d0b5955a00b09d66d" exitCode=0 Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.490017 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" event={"ID":"d66e251a-5a67-45c4-be63-2f46b56df1a5","Type":"ContainerDied","Data":"39d3ccbff1a26b8ae79eeb17cd893cdf52835f2ffeb0b20d0b5955a00b09d66d"} Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.640361 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738431 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-cliconfig\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738507 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-provider-selection\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738590 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-ocp-branding-template\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738618 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-error\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738636 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-login\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738653 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d66e251a-5a67-45c4-be63-2f46b56df1a5-audit-dir\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738681 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-service-ca\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738701 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-serving-cert\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738718 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-router-certs\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " 
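
The reconciler entries above and below this point trace a complete volume teardown for pod UID d66e251a-5a67-45c4-be63-2f46b56df1a5 (oauth-openshift-558db77b4-hdxh9): each secret, configmap, projected and host-path volume goes through "operationExecutor.UnmountVolume started", then "UnmountVolume.TearDown succeeded", and finally "Volume detached". When following one pod through a capture like this, it can help to pull out just those lifecycle lines for a single pod UID. The following is a minimal sketch under that assumption: it expects the journal to have been exported to a plain-text file first, and the file name kubelet.log is an assumption, not something taken from this log.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Filter an exported kubelet journal for the volume-lifecycle messages of one
// pod UID, matching the three stages visible in the surrounding entries.
// The input file name is an assumption (e.g. a journalctl export saved locally).
func main() {
	const podUID = "d66e251a-5a67-45c4-be63-2f46b56df1a5" // pod UID taken from the entries above

	f, err := os.Open("kubelet.log") // assumed export location
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	markers := []string{
		"operationExecutor.UnmountVolume started",
		"UnmountVolume.TearDown succeeded",
		"Volume detached for volume",
	}

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, podUID) {
			continue
		}
		for _, m := range markers {
			if strings.Contains(line, m) {
				fmt.Println(line)
				break
			}
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}

Run against such an export, this prints the same started / succeeded / detached sequence per volume that the surrounding reconciler entries show for this pod.
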
Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738737 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-audit-policies\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738751 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-trusted-ca-bundle\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738774 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86g8r\" (UniqueName: \"kubernetes.io/projected/d66e251a-5a67-45c4-be63-2f46b56df1a5-kube-api-access-86g8r\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738777 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d66e251a-5a67-45c4-be63-2f46b56df1a5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738803 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-idp-0-file-data\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.738937 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-session\") pod \"d66e251a-5a67-45c4-be63-2f46b56df1a5\" (UID: \"d66e251a-5a67-45c4-be63-2f46b56df1a5\") " Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.739364 4806 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d66e251a-5a67-45c4-be63-2f46b56df1a5-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.739478 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.740630 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.741241 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.744452 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.744968 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.747335 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.747808 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.748356 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.748444 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.749022 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.749107 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66e251a-5a67-45c4-be63-2f46b56df1a5-kube-api-access-86g8r" (OuterVolumeSpecName: "kube-api-access-86g8r") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "kube-api-access-86g8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.749301 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.753808 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d66e251a-5a67-45c4-be63-2f46b56df1a5" (UID: "d66e251a-5a67-45c4-be63-2f46b56df1a5"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840203 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840241 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840253 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840267 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840278 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840288 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840298 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840309 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840319 4806 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840328 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86g8r\" (UniqueName: \"kubernetes.io/projected/d66e251a-5a67-45c4-be63-2f46b56df1a5-kube-api-access-86g8r\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840337 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840346 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:09 crc kubenswrapper[4806]: I0126 07:57:09.840354 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d66e251a-5a67-45c4-be63-2f46b56df1a5-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.432060 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.497072 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" event={"ID":"d66e251a-5a67-45c4-be63-2f46b56df1a5","Type":"ContainerDied","Data":"ac13517043271dfada98aa335b446c39c3de3cf4582d80da3b3b9528a1d8cea0"} Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.497141 4806 scope.go:117] "RemoveContainer" containerID="39d3ccbff1a26b8ae79eeb17cd893cdf52835f2ffeb0b20d0b5955a00b09d66d" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.497099 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hdxh9" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.500984 4806 generic.go:334] "Generic (PLEG): container finished" podID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" containerID="7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4" exitCode=0 Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.501028 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmd7h" event={"ID":"cf65c110-74ee-4e0c-a7e8-bb27c891ff12","Type":"ContainerDied","Data":"7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4"} Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.501062 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmd7h" event={"ID":"cf65c110-74ee-4e0c-a7e8-bb27c891ff12","Type":"ContainerDied","Data":"cb57cb3421d92cf7b4b438e5228673cae473582d2b36033754d5c568b867dc63"} Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.501134 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cmd7h" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.516958 4806 scope.go:117] "RemoveContainer" containerID="7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.534000 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hdxh9"] Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.534054 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hdxh9"] Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.550030 4806 scope.go:117] "RemoveContainer" containerID="4223d58b07c8a779337c90d9019006f7bdb82f10b5281163b174ce1a8eb27de0" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.550891 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr85x\" (UniqueName: \"kubernetes.io/projected/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-kube-api-access-xr85x\") pod \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\" (UID: \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\") " Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.551008 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-utilities\") pod \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\" (UID: \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\") " Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.551069 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-catalog-content\") pod \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\" (UID: \"cf65c110-74ee-4e0c-a7e8-bb27c891ff12\") " Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.552275 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-utilities" (OuterVolumeSpecName: "utilities") pod "cf65c110-74ee-4e0c-a7e8-bb27c891ff12" (UID: "cf65c110-74ee-4e0c-a7e8-bb27c891ff12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.554374 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-kube-api-access-xr85x" (OuterVolumeSpecName: "kube-api-access-xr85x") pod "cf65c110-74ee-4e0c-a7e8-bb27c891ff12" (UID: "cf65c110-74ee-4e0c-a7e8-bb27c891ff12"). InnerVolumeSpecName "kube-api-access-xr85x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.564565 4806 scope.go:117] "RemoveContainer" containerID="3063b61193856a80d2928479406834902b39f7cf88a4972ca8a1a1a34ceb6a1e" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.576636 4806 scope.go:117] "RemoveContainer" containerID="7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4" Jan 26 07:57:10 crc kubenswrapper[4806]: E0126 07:57:10.577119 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4\": container with ID starting with 7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4 not found: ID does not exist" containerID="7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.577170 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4"} err="failed to get container status \"7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4\": rpc error: code = NotFound desc = could not find container \"7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4\": container with ID starting with 7bf666faccd61f121827ae1e2022c62d2329ac64117bf6f624fdf1d545cdd3f4 not found: ID does not exist" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.577200 4806 scope.go:117] "RemoveContainer" containerID="4223d58b07c8a779337c90d9019006f7bdb82f10b5281163b174ce1a8eb27de0" Jan 26 07:57:10 crc kubenswrapper[4806]: E0126 07:57:10.577508 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4223d58b07c8a779337c90d9019006f7bdb82f10b5281163b174ce1a8eb27de0\": container with ID starting with 4223d58b07c8a779337c90d9019006f7bdb82f10b5281163b174ce1a8eb27de0 not found: ID does not exist" containerID="4223d58b07c8a779337c90d9019006f7bdb82f10b5281163b174ce1a8eb27de0" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.577610 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4223d58b07c8a779337c90d9019006f7bdb82f10b5281163b174ce1a8eb27de0"} err="failed to get container status \"4223d58b07c8a779337c90d9019006f7bdb82f10b5281163b174ce1a8eb27de0\": rpc error: code = NotFound desc = could not find container \"4223d58b07c8a779337c90d9019006f7bdb82f10b5281163b174ce1a8eb27de0\": container with ID starting with 4223d58b07c8a779337c90d9019006f7bdb82f10b5281163b174ce1a8eb27de0 not found: ID does not exist" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.577633 4806 scope.go:117] "RemoveContainer" containerID="3063b61193856a80d2928479406834902b39f7cf88a4972ca8a1a1a34ceb6a1e" Jan 26 07:57:10 crc kubenswrapper[4806]: E0126 07:57:10.577939 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3063b61193856a80d2928479406834902b39f7cf88a4972ca8a1a1a34ceb6a1e\": container with ID starting with 3063b61193856a80d2928479406834902b39f7cf88a4972ca8a1a1a34ceb6a1e not found: ID does not exist" containerID="3063b61193856a80d2928479406834902b39f7cf88a4972ca8a1a1a34ceb6a1e" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.577975 4806 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3063b61193856a80d2928479406834902b39f7cf88a4972ca8a1a1a34ceb6a1e"} err="failed to get container status \"3063b61193856a80d2928479406834902b39f7cf88a4972ca8a1a1a34ceb6a1e\": rpc error: code = NotFound desc = could not find container \"3063b61193856a80d2928479406834902b39f7cf88a4972ca8a1a1a34ceb6a1e\": container with ID starting with 3063b61193856a80d2928479406834902b39f7cf88a4972ca8a1a1a34ceb6a1e not found: ID does not exist" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.652767 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr85x\" (UniqueName: \"kubernetes.io/projected/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-kube-api-access-xr85x\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.653030 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.674343 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cf65c110-74ee-4e0c-a7e8-bb27c891ff12" (UID: "cf65c110-74ee-4e0c-a7e8-bb27c891ff12"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.754024 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf65c110-74ee-4e0c-a7e8-bb27c891ff12-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.832666 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cmd7h"] Jan 26 07:57:10 crc kubenswrapper[4806]: I0126 07:57:10.835420 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cmd7h"] Jan 26 07:57:11 crc kubenswrapper[4806]: I0126 07:57:11.053927 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" path="/var/lib/kubelet/pods/cf65c110-74ee-4e0c-a7e8-bb27c891ff12/volumes" Jan 26 07:57:11 crc kubenswrapper[4806]: I0126 07:57:11.054740 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d66e251a-5a67-45c4-be63-2f46b56df1a5" path="/var/lib/kubelet/pods/d66e251a-5a67-45c4-be63-2f46b56df1a5/volumes" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.496331 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-69879bb87d-fjg57"] Jan 26 07:57:14 crc kubenswrapper[4806]: E0126 07:57:14.497375 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339dd820-f50a-4135-9da6-5768324b8d55" containerName="extract-content" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.497399 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="339dd820-f50a-4135-9da6-5768324b8d55" containerName="extract-content" Jan 26 07:57:14 crc kubenswrapper[4806]: E0126 07:57:14.497422 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" containerName="extract-utilities" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.497436 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" containerName="extract-utilities" Jan 26 07:57:14 
crc kubenswrapper[4806]: E0126 07:57:14.497454 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66e251a-5a67-45c4-be63-2f46b56df1a5" containerName="oauth-openshift" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.497467 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66e251a-5a67-45c4-be63-2f46b56df1a5" containerName="oauth-openshift" Jan 26 07:57:14 crc kubenswrapper[4806]: E0126 07:57:14.497483 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" containerName="extract-content" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.497495 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" containerName="extract-content" Jan 26 07:57:14 crc kubenswrapper[4806]: E0126 07:57:14.497511 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339dd820-f50a-4135-9da6-5768324b8d55" containerName="registry-server" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.497546 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="339dd820-f50a-4135-9da6-5768324b8d55" containerName="registry-server" Jan 26 07:57:14 crc kubenswrapper[4806]: E0126 07:57:14.497562 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="339dd820-f50a-4135-9da6-5768324b8d55" containerName="extract-utilities" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.497574 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="339dd820-f50a-4135-9da6-5768324b8d55" containerName="extract-utilities" Jan 26 07:57:14 crc kubenswrapper[4806]: E0126 07:57:14.497598 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" containerName="registry-server" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.497609 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" containerName="registry-server" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.497773 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66e251a-5a67-45c4-be63-2f46b56df1a5" containerName="oauth-openshift" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.497793 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="339dd820-f50a-4135-9da6-5768324b8d55" containerName="registry-server" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.497814 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf65c110-74ee-4e0c-a7e8-bb27c891ff12" containerName="registry-server" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.498484 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.502912 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.502929 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.504864 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.504984 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.505588 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.505780 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.505807 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.506187 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.506227 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.506265 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.509215 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.509383 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.514728 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.517621 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69879bb87d-fjg57"] Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.531133 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.540229 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606243 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " 
pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606288 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606313 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc382700-edfa-46b2-86f6-45f65ca9f96e-audit-dir\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606343 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-user-template-login\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606362 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-router-certs\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606380 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-session\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606408 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc382700-edfa-46b2-86f6-45f65ca9f96e-audit-policies\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606435 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-service-ca\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606452 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606473 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606487 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-user-template-error\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606510 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606550 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.606586 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzgrs\" (UniqueName: \"kubernetes.io/projected/bc382700-edfa-46b2-86f6-45f65ca9f96e-kube-api-access-fzgrs\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708583 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-session\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708647 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc382700-edfa-46b2-86f6-45f65ca9f96e-audit-policies\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708690 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-service-ca\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708714 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708742 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708766 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-user-template-error\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708797 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708824 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708849 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzgrs\" (UniqueName: \"kubernetes.io/projected/bc382700-edfa-46b2-86f6-45f65ca9f96e-kube-api-access-fzgrs\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708877 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708905 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708931 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc382700-edfa-46b2-86f6-45f65ca9f96e-audit-dir\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708969 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-user-template-login\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.708996 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-router-certs\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.710381 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bc382700-edfa-46b2-86f6-45f65ca9f96e-audit-dir\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.711637 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.712125 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bc382700-edfa-46b2-86f6-45f65ca9f96e-audit-policies\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.713046 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.716007 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.716588 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.716847 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-user-template-error\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.717187 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-service-ca\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.722950 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.723207 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.723987 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-user-template-login\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.725084 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.729408 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bc382700-edfa-46b2-86f6-45f65ca9f96e-v4-0-config-system-session\") pod \"oauth-openshift-69879bb87d-fjg57\" 
(UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.733147 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzgrs\" (UniqueName: \"kubernetes.io/projected/bc382700-edfa-46b2-86f6-45f65ca9f96e-kube-api-access-fzgrs\") pod \"oauth-openshift-69879bb87d-fjg57\" (UID: \"bc382700-edfa-46b2-86f6-45f65ca9f96e\") " pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:14 crc kubenswrapper[4806]: I0126 07:57:14.825643 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:15 crc kubenswrapper[4806]: I0126 07:57:15.294032 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:57:15 crc kubenswrapper[4806]: I0126 07:57:15.342010 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-69879bb87d-fjg57"] Jan 26 07:57:15 crc kubenswrapper[4806]: I0126 07:57:15.351366 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:57:15 crc kubenswrapper[4806]: W0126 07:57:15.358636 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc382700_edfa_46b2_86f6_45f65ca9f96e.slice/crio-a9306bb18eb8befc0e65892f47bf8c063c61bb574ad7e768c2a0d6a4f2d0b7ca WatchSource:0}: Error finding container a9306bb18eb8befc0e65892f47bf8c063c61bb574ad7e768c2a0d6a4f2d0b7ca: Status 404 returned error can't find the container with id a9306bb18eb8befc0e65892f47bf8c063c61bb574ad7e768c2a0d6a4f2d0b7ca Jan 26 07:57:15 crc kubenswrapper[4806]: I0126 07:57:15.541145 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" event={"ID":"bc382700-edfa-46b2-86f6-45f65ca9f96e","Type":"ContainerStarted","Data":"a9306bb18eb8befc0e65892f47bf8c063c61bb574ad7e768c2a0d6a4f2d0b7ca"} Jan 26 07:57:15 crc kubenswrapper[4806]: I0126 07:57:15.807090 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 07:57:15 crc kubenswrapper[4806]: I0126 07:57:15.807564 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 07:57:15 crc kubenswrapper[4806]: I0126 07:57:15.807619 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 07:57:15 crc kubenswrapper[4806]: I0126 07:57:15.808222 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Jan 26 07:57:15 crc kubenswrapper[4806]: I0126 07:57:15.808294 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04" gracePeriod=600 Jan 26 07:57:17 crc kubenswrapper[4806]: I0126 07:57:17.555953 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" event={"ID":"bc382700-edfa-46b2-86f6-45f65ca9f96e","Type":"ContainerStarted","Data":"a4df7116a57c70684513a1fdf2e6958c9975a9fe976c8ece2d62e36a73973a82"} Jan 26 07:57:17 crc kubenswrapper[4806]: I0126 07:57:17.556292 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:17 crc kubenswrapper[4806]: I0126 07:57:17.558060 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04" exitCode=0 Jan 26 07:57:17 crc kubenswrapper[4806]: I0126 07:57:17.558105 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04"} Jan 26 07:57:17 crc kubenswrapper[4806]: I0126 07:57:17.564072 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" Jan 26 07:57:17 crc kubenswrapper[4806]: I0126 07:57:17.582673 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-69879bb87d-fjg57" podStartSLOduration=34.582646894 podStartE2EDuration="34.582646894s" podCreationTimestamp="2026-01-26 07:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:57:17.58045684 +0000 UTC m=+216.844864986" watchObservedRunningTime="2026-01-26 07:57:17.582646894 +0000 UTC m=+216.847054980" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.127394 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-96f6d79dc-fwc4q"] Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.127929 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" podUID="217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f" containerName="controller-manager" containerID="cri-o://4455fe9c76c39e494a6709b203e057e65fa8c2992c38b7a9846cbaf1bd320613" gracePeriod=30 Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.223498 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx"] Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.223710 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" podUID="4a8e9db2-4f5a-4499-962a-c8f784f509c6" containerName="route-controller-manager" containerID="cri-o://8628bc1daff07c10fb3ccb6936c4281a5090982cb2de289fe349d61bd65b32b9" gracePeriod=30 Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 
07:57:18.563989 4806 generic.go:334] "Generic (PLEG): container finished" podID="217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f" containerID="4455fe9c76c39e494a6709b203e057e65fa8c2992c38b7a9846cbaf1bd320613" exitCode=0 Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.564329 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" event={"ID":"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f","Type":"ContainerDied","Data":"4455fe9c76c39e494a6709b203e057e65fa8c2992c38b7a9846cbaf1bd320613"} Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.567000 4806 generic.go:334] "Generic (PLEG): container finished" podID="4a8e9db2-4f5a-4499-962a-c8f784f509c6" containerID="8628bc1daff07c10fb3ccb6936c4281a5090982cb2de289fe349d61bd65b32b9" exitCode=0 Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.567070 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" event={"ID":"4a8e9db2-4f5a-4499-962a-c8f784f509c6","Type":"ContainerDied","Data":"8628bc1daff07c10fb3ccb6936c4281a5090982cb2de289fe349d61bd65b32b9"} Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.574986 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"61cccd600d491aa95cc07ae3edd2fe4d985307d841d68d06d1cce694939e53c9"} Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.818890 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.879962 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8e9db2-4f5a-4499-962a-c8f784f509c6-config\") pod \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.880185 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxfvg\" (UniqueName: \"kubernetes.io/projected/4a8e9db2-4f5a-4499-962a-c8f784f509c6-kube-api-access-kxfvg\") pod \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.880355 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8e9db2-4f5a-4499-962a-c8f784f509c6-client-ca\") pod \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.880434 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8e9db2-4f5a-4499-962a-c8f784f509c6-serving-cert\") pod \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\" (UID: \"4a8e9db2-4f5a-4499-962a-c8f784f509c6\") " Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.881184 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8e9db2-4f5a-4499-962a-c8f784f509c6-config" (OuterVolumeSpecName: "config") pod "4a8e9db2-4f5a-4499-962a-c8f784f509c6" (UID: "4a8e9db2-4f5a-4499-962a-c8f784f509c6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.883646 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8e9db2-4f5a-4499-962a-c8f784f509c6-client-ca" (OuterVolumeSpecName: "client-ca") pod "4a8e9db2-4f5a-4499-962a-c8f784f509c6" (UID: "4a8e9db2-4f5a-4499-962a-c8f784f509c6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.887354 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a8e9db2-4f5a-4499-962a-c8f784f509c6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4a8e9db2-4f5a-4499-962a-c8f784f509c6" (UID: "4a8e9db2-4f5a-4499-962a-c8f784f509c6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.890267 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8e9db2-4f5a-4499-962a-c8f784f509c6-kube-api-access-kxfvg" (OuterVolumeSpecName: "kube-api-access-kxfvg") pod "4a8e9db2-4f5a-4499-962a-c8f784f509c6" (UID: "4a8e9db2-4f5a-4499-962a-c8f784f509c6"). InnerVolumeSpecName "kube-api-access-kxfvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.929349 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.981900 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zggrn\" (UniqueName: \"kubernetes.io/projected/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-kube-api-access-zggrn\") pod \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.982009 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-serving-cert\") pod \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.982059 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-client-ca\") pod \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.982083 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-config\") pod \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.982121 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-proxy-ca-bundles\") pod \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\" (UID: \"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f\") " Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.982391 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/4a8e9db2-4f5a-4499-962a-c8f784f509c6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.982409 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8e9db2-4f5a-4499-962a-c8f784f509c6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.982418 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8e9db2-4f5a-4499-962a-c8f784f509c6-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.982427 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxfvg\" (UniqueName: \"kubernetes.io/projected/4a8e9db2-4f5a-4499-962a-c8f784f509c6-kube-api-access-kxfvg\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.983175 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f" (UID: "217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.984997 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-client-ca" (OuterVolumeSpecName: "client-ca") pod "217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f" (UID: "217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.985338 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-config" (OuterVolumeSpecName: "config") pod "217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f" (UID: "217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.989868 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-kube-api-access-zggrn" (OuterVolumeSpecName: "kube-api-access-zggrn") pod "217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f" (UID: "217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f"). InnerVolumeSpecName "kube-api-access-zggrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:57:18 crc kubenswrapper[4806]: I0126 07:57:18.990717 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f" (UID: "217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.084342 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zggrn\" (UniqueName: \"kubernetes.io/projected/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-kube-api-access-zggrn\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.084390 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.084408 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.084422 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-config\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.084437 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.496125 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r"] Jan 26 07:57:19 crc kubenswrapper[4806]: E0126 07:57:19.496434 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a8e9db2-4f5a-4499-962a-c8f784f509c6" containerName="route-controller-manager" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.496449 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8e9db2-4f5a-4499-962a-c8f784f509c6" containerName="route-controller-manager" Jan 26 07:57:19 crc kubenswrapper[4806]: E0126 07:57:19.496465 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f" containerName="controller-manager" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.496472 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f" containerName="controller-manager" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.496615 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a8e9db2-4f5a-4499-962a-c8f784f509c6" containerName="route-controller-manager" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.496632 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f" containerName="controller-manager" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.497158 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.499502 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6485f867bf-pzb4p"] Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.502165 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.518474 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6485f867bf-pzb4p"] Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.534540 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r"] Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.584211 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" event={"ID":"4a8e9db2-4f5a-4499-962a-c8f784f509c6","Type":"ContainerDied","Data":"184f4e14b3b8b9b68afb64d9af7e05fddf7a068994be23d8d16c1cb7e7c4f090"} Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.584267 4806 scope.go:117] "RemoveContainer" containerID="8628bc1daff07c10fb3ccb6936c4281a5090982cb2de289fe349d61bd65b32b9" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.584403 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.596842 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0775fd2f-e9a2-480c-9d9a-06a699a798cd-serving-cert\") pod \"route-controller-manager-855c4f8dd7-hvp5r\" (UID: \"0775fd2f-e9a2-480c-9d9a-06a699a798cd\") " pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.596905 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng9mq\" (UniqueName: \"kubernetes.io/projected/36f3b6b5-cebd-49fa-b71d-c1e067e01654-kube-api-access-ng9mq\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.596939 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36f3b6b5-cebd-49fa-b71d-c1e067e01654-proxy-ca-bundles\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.596982 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36f3b6b5-cebd-49fa-b71d-c1e067e01654-config\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.597017 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36f3b6b5-cebd-49fa-b71d-c1e067e01654-client-ca\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.597060 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0775fd2f-e9a2-480c-9d9a-06a699a798cd-config\") pod \"route-controller-manager-855c4f8dd7-hvp5r\" (UID: \"0775fd2f-e9a2-480c-9d9a-06a699a798cd\") " pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.597087 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btn7g\" (UniqueName: \"kubernetes.io/projected/0775fd2f-e9a2-480c-9d9a-06a699a798cd-kube-api-access-btn7g\") pod \"route-controller-manager-855c4f8dd7-hvp5r\" (UID: \"0775fd2f-e9a2-480c-9d9a-06a699a798cd\") " pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.597142 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36f3b6b5-cebd-49fa-b71d-c1e067e01654-serving-cert\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.597183 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0775fd2f-e9a2-480c-9d9a-06a699a798cd-client-ca\") pod \"route-controller-manager-855c4f8dd7-hvp5r\" (UID: \"0775fd2f-e9a2-480c-9d9a-06a699a798cd\") " pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.597955 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" event={"ID":"217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f","Type":"ContainerDied","Data":"f149206e66ee1b7ab3aba2019914caa681ef5be5d69cab20f97dbd29acd5158a"} Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.598204 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-96f6d79dc-fwc4q" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.607957 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx"] Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.613292 4806 scope.go:117] "RemoveContainer" containerID="4455fe9c76c39e494a6709b203e057e65fa8c2992c38b7a9846cbaf1bd320613" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.616616 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-677859bc77-4cmtx"] Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.621720 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-96f6d79dc-fwc4q"] Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.623071 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-96f6d79dc-fwc4q"] Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.698675 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0775fd2f-e9a2-480c-9d9a-06a699a798cd-serving-cert\") pod \"route-controller-manager-855c4f8dd7-hvp5r\" (UID: \"0775fd2f-e9a2-480c-9d9a-06a699a798cd\") " pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.698754 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng9mq\" (UniqueName: \"kubernetes.io/projected/36f3b6b5-cebd-49fa-b71d-c1e067e01654-kube-api-access-ng9mq\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.698780 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36f3b6b5-cebd-49fa-b71d-c1e067e01654-proxy-ca-bundles\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.698875 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36f3b6b5-cebd-49fa-b71d-c1e067e01654-config\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.698926 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36f3b6b5-cebd-49fa-b71d-c1e067e01654-client-ca\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.698977 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0775fd2f-e9a2-480c-9d9a-06a699a798cd-config\") pod \"route-controller-manager-855c4f8dd7-hvp5r\" (UID: \"0775fd2f-e9a2-480c-9d9a-06a699a798cd\") " 
pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.699001 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btn7g\" (UniqueName: \"kubernetes.io/projected/0775fd2f-e9a2-480c-9d9a-06a699a798cd-kube-api-access-btn7g\") pod \"route-controller-manager-855c4f8dd7-hvp5r\" (UID: \"0775fd2f-e9a2-480c-9d9a-06a699a798cd\") " pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.699065 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36f3b6b5-cebd-49fa-b71d-c1e067e01654-serving-cert\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.699117 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0775fd2f-e9a2-480c-9d9a-06a699a798cd-client-ca\") pod \"route-controller-manager-855c4f8dd7-hvp5r\" (UID: \"0775fd2f-e9a2-480c-9d9a-06a699a798cd\") " pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.700411 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/36f3b6b5-cebd-49fa-b71d-c1e067e01654-client-ca\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.701581 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/36f3b6b5-cebd-49fa-b71d-c1e067e01654-proxy-ca-bundles\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.701973 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0775fd2f-e9a2-480c-9d9a-06a699a798cd-config\") pod \"route-controller-manager-855c4f8dd7-hvp5r\" (UID: \"0775fd2f-e9a2-480c-9d9a-06a699a798cd\") " pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.702761 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0775fd2f-e9a2-480c-9d9a-06a699a798cd-client-ca\") pod \"route-controller-manager-855c4f8dd7-hvp5r\" (UID: \"0775fd2f-e9a2-480c-9d9a-06a699a798cd\") " pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.703205 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36f3b6b5-cebd-49fa-b71d-c1e067e01654-config\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.704053 4806 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0775fd2f-e9a2-480c-9d9a-06a699a798cd-serving-cert\") pod \"route-controller-manager-855c4f8dd7-hvp5r\" (UID: \"0775fd2f-e9a2-480c-9d9a-06a699a798cd\") " pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.704597 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36f3b6b5-cebd-49fa-b71d-c1e067e01654-serving-cert\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.715951 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng9mq\" (UniqueName: \"kubernetes.io/projected/36f3b6b5-cebd-49fa-b71d-c1e067e01654-kube-api-access-ng9mq\") pod \"controller-manager-6485f867bf-pzb4p\" (UID: \"36f3b6b5-cebd-49fa-b71d-c1e067e01654\") " pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.724306 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btn7g\" (UniqueName: \"kubernetes.io/projected/0775fd2f-e9a2-480c-9d9a-06a699a798cd-kube-api-access-btn7g\") pod \"route-controller-manager-855c4f8dd7-hvp5r\" (UID: \"0775fd2f-e9a2-480c-9d9a-06a699a798cd\") " pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.815471 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:19 crc kubenswrapper[4806]: I0126 07:57:19.825918 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:20 crc kubenswrapper[4806]: I0126 07:57:20.303119 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r"] Jan 26 07:57:20 crc kubenswrapper[4806]: I0126 07:57:20.378843 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6485f867bf-pzb4p"] Jan 26 07:57:20 crc kubenswrapper[4806]: W0126 07:57:20.391012 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36f3b6b5_cebd_49fa_b71d_c1e067e01654.slice/crio-83ff95be8f6e57f20b4cbb87b3718f0f507dfdc8233119f01fb136b0b4226eeb WatchSource:0}: Error finding container 83ff95be8f6e57f20b4cbb87b3718f0f507dfdc8233119f01fb136b0b4226eeb: Status 404 returned error can't find the container with id 83ff95be8f6e57f20b4cbb87b3718f0f507dfdc8233119f01fb136b0b4226eeb Jan 26 07:57:20 crc kubenswrapper[4806]: I0126 07:57:20.603482 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" event={"ID":"36f3b6b5-cebd-49fa-b71d-c1e067e01654","Type":"ContainerStarted","Data":"f68b40991145135833449dedb2452446e23af217106bdfc59e507a006afcc52c"} Jan 26 07:57:20 crc kubenswrapper[4806]: I0126 07:57:20.603541 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" event={"ID":"36f3b6b5-cebd-49fa-b71d-c1e067e01654","Type":"ContainerStarted","Data":"83ff95be8f6e57f20b4cbb87b3718f0f507dfdc8233119f01fb136b0b4226eeb"} Jan 26 07:57:20 crc kubenswrapper[4806]: I0126 07:57:20.604780 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:20 crc kubenswrapper[4806]: I0126 07:57:20.610965 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" event={"ID":"0775fd2f-e9a2-480c-9d9a-06a699a798cd","Type":"ContainerStarted","Data":"e1355d94b6e0aa1a29ea27a0658bc45042e2d43c76c293c1cdbc92f35c6bc969"} Jan 26 07:57:20 crc kubenswrapper[4806]: I0126 07:57:20.610995 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" event={"ID":"0775fd2f-e9a2-480c-9d9a-06a699a798cd","Type":"ContainerStarted","Data":"c6ddd354e632ba51f8c7cabccd97911e09dc7323d3f5655e49a7687397636c19"} Jan 26 07:57:20 crc kubenswrapper[4806]: I0126 07:57:20.611704 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:20 crc kubenswrapper[4806]: I0126 07:57:20.613474 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" Jan 26 07:57:20 crc kubenswrapper[4806]: I0126 07:57:20.625643 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6485f867bf-pzb4p" podStartSLOduration=2.625623996 podStartE2EDuration="2.625623996s" podCreationTimestamp="2026-01-26 07:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:57:20.623551646 +0000 UTC m=+219.887959702" 
watchObservedRunningTime="2026-01-26 07:57:20.625623996 +0000 UTC m=+219.890032052" Jan 26 07:57:20 crc kubenswrapper[4806]: I0126 07:57:20.639115 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" podStartSLOduration=2.639101425 podStartE2EDuration="2.639101425s" podCreationTimestamp="2026-01-26 07:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:57:20.638186718 +0000 UTC m=+219.902594774" watchObservedRunningTime="2026-01-26 07:57:20.639101425 +0000 UTC m=+219.903509481" Jan 26 07:57:21 crc kubenswrapper[4806]: I0126 07:57:21.028664 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-855c4f8dd7-hvp5r" Jan 26 07:57:21 crc kubenswrapper[4806]: I0126 07:57:21.047481 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f" path="/var/lib/kubelet/pods/217de6b0-bcc1-40a2-b9bd-b61ebd6b3d5f/volumes" Jan 26 07:57:21 crc kubenswrapper[4806]: I0126 07:57:21.048146 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a8e9db2-4f5a-4499-962a-c8f784f509c6" path="/var/lib/kubelet/pods/4a8e9db2-4f5a-4499-962a-c8f784f509c6/volumes" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.019200 4806 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.020916 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3" gracePeriod=15 Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.020911 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac" gracePeriod=15 Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.020921 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc" gracePeriod=15 Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.020850 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f" gracePeriod=15 Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.020924 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302" gracePeriod=15 Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.021442 4806 kubelet.go:2421] 
"SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 07:57:32 crc kubenswrapper[4806]: E0126 07:57:32.023827 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.023907 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 07:57:32 crc kubenswrapper[4806]: E0126 07:57:32.023967 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.024023 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 07:57:32 crc kubenswrapper[4806]: E0126 07:57:32.024079 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.024142 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 07:57:32 crc kubenswrapper[4806]: E0126 07:57:32.024199 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.024255 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 07:57:32 crc kubenswrapper[4806]: E0126 07:57:32.024312 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.024365 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 07:57:32 crc kubenswrapper[4806]: E0126 07:57:32.024449 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.024508 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 07:57:32 crc kubenswrapper[4806]: E0126 07:57:32.024600 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.024658 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.024813 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.024876 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.024933 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.024993 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.025058 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.025115 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.026993 4806 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.031777 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.036038 4806 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.074655 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.074712 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.074737 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.074765 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.074798 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.074843 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" 
(UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.074875 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.074903 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.175664 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.175706 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.175754 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.175773 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.175792 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.175820 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.175833 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.175872 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.175930 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.176168 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.176266 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.176304 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.176336 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.176368 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.176402 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.176668 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" 
(UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.687563 4806 generic.go:334] "Generic (PLEG): container finished" podID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" containerID="ba1df00b886e4f451e117cf26613045ab8106bd35b88b17625275bbe96b1e401" exitCode=0 Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.687644 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1","Type":"ContainerDied","Data":"ba1df00b886e4f451e117cf26613045ab8106bd35b88b17625275bbe96b1e401"} Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.688416 4806 status_manager.go:851] "Failed to get status for pod" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.690146 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.691499 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.692356 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3" exitCode=0 Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.692382 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac" exitCode=0 Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.692393 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc" exitCode=0 Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.692402 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302" exitCode=2 Jan 26 07:57:32 crc kubenswrapper[4806]: I0126 07:57:32.692447 4806 scope.go:117] "RemoveContainer" containerID="6442d8b57ec5789bc5ad219da279d42cd053598b4a8a843b4190a619750483d3" Jan 26 07:57:33 crc kubenswrapper[4806]: I0126 07:57:33.700935 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 07:57:33 crc kubenswrapper[4806]: E0126 07:57:33.962152 4806 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:33 crc kubenswrapper[4806]: E0126 07:57:33.962747 4806 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 
07:57:33 crc kubenswrapper[4806]: E0126 07:57:33.963003 4806 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:33 crc kubenswrapper[4806]: E0126 07:57:33.963179 4806 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:33 crc kubenswrapper[4806]: E0126 07:57:33.963352 4806 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:33 crc kubenswrapper[4806]: I0126 07:57:33.963374 4806 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 07:57:33 crc kubenswrapper[4806]: E0126 07:57:33.963556 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="200ms" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.144990 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.145614 4806 status_manager.go:851] "Failed to get status for pod" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:34 crc kubenswrapper[4806]: E0126 07:57:34.164222 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="400ms" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.212290 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-var-lock\") pod \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\" (UID: \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\") " Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.212400 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-kubelet-dir\") pod \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\" (UID: \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\") " Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.212513 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-kube-api-access\") pod \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\" (UID: \"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1\") " Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.213393 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-var-lock" (OuterVolumeSpecName: "var-lock") pod "fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" (UID: "fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.213460 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" (UID: "fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.223974 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" (UID: "fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.313913 4806 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.313952 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.313962 4806 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.492332 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.494436 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.495052 4806 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.495365 4806 status_manager.go:851] "Failed to get status for pod" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:34 crc kubenswrapper[4806]: E0126 07:57:34.565114 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="800ms" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.618395 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.618449 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.618584 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.618807 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.618907 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.618987 4806 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.618813 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.710691 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.712589 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f" exitCode=0 Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.712744 4806 scope.go:117] "RemoveContainer" containerID="1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.713010 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.723902 4806 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.726740 4806 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.727618 4806 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.728134 4806 status_manager.go:851] "Failed to get status for pod" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.729855 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1","Type":"ContainerDied","Data":"dd933404c856bc701675ec20780237b4d3553e674c57b77e81f49298bae3b8ac"} Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.729887 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd933404c856bc701675ec20780237b4d3553e674c57b77e81f49298bae3b8ac" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.729955 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.743107 4806 status_manager.go:851] "Failed to get status for pod" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.743327 4806 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.755446 4806 scope.go:117] "RemoveContainer" containerID="2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.771223 4806 scope.go:117] "RemoveContainer" containerID="befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.786695 4806 scope.go:117] "RemoveContainer" containerID="6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.801184 4806 scope.go:117] "RemoveContainer" containerID="370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.819859 4806 scope.go:117] "RemoveContainer" containerID="02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.849008 4806 scope.go:117] "RemoveContainer" containerID="1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3" Jan 26 07:57:34 crc kubenswrapper[4806]: E0126 07:57:34.852586 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\": container with ID starting with 1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3 not found: ID does not exist" containerID="1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.852647 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3"} err="failed to get container status \"1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\": rpc error: code = NotFound desc = could not find container \"1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3\": container with ID starting with 1d8fe9445b34c3dea0e5508fbbd9d8c72d3b0bd3c927f3d0469fb5f86a61b7a3 not found: ID does not exist" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.852688 4806 scope.go:117] "RemoveContainer" containerID="2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac" Jan 26 07:57:34 crc kubenswrapper[4806]: E0126 07:57:34.853082 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\": container with ID starting with 2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac not found: ID does not exist" 
containerID="2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.853194 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac"} err="failed to get container status \"2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\": rpc error: code = NotFound desc = could not find container \"2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac\": container with ID starting with 2eda625619208720589e5f413d0df8e1e55e7fd5ab4a2e59d64e19150afb05ac not found: ID does not exist" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.853276 4806 scope.go:117] "RemoveContainer" containerID="befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc" Jan 26 07:57:34 crc kubenswrapper[4806]: E0126 07:57:34.853862 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\": container with ID starting with befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc not found: ID does not exist" containerID="befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.853955 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc"} err="failed to get container status \"befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\": rpc error: code = NotFound desc = could not find container \"befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc\": container with ID starting with befc3a7310ed730ab06800d0ffe5a5ad206f5193f921d0080543d96e53c382cc not found: ID does not exist" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.854033 4806 scope.go:117] "RemoveContainer" containerID="6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302" Jan 26 07:57:34 crc kubenswrapper[4806]: E0126 07:57:34.854410 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\": container with ID starting with 6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302 not found: ID does not exist" containerID="6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.854506 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302"} err="failed to get container status \"6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\": rpc error: code = NotFound desc = could not find container \"6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302\": container with ID starting with 6f4e90bd8d92e4159f45b887647465c23595d6a14cb9fd5a6f3b6c736345b302 not found: ID does not exist" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.854597 4806 scope.go:117] "RemoveContainer" containerID="370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f" Jan 26 07:57:34 crc kubenswrapper[4806]: E0126 07:57:34.855578 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\": container with ID starting with 370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f not found: ID does not exist" containerID="370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.855679 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f"} err="failed to get container status \"370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\": rpc error: code = NotFound desc = could not find container \"370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f\": container with ID starting with 370998a25827938e42e6ca8f51cb20c64068d266a2a6e92f43cfa6033efe1f1f not found: ID does not exist" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.855751 4806 scope.go:117] "RemoveContainer" containerID="02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6" Jan 26 07:57:34 crc kubenswrapper[4806]: E0126 07:57:34.856109 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\": container with ID starting with 02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6 not found: ID does not exist" containerID="02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6" Jan 26 07:57:34 crc kubenswrapper[4806]: I0126 07:57:34.856199 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6"} err="failed to get container status \"02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\": rpc error: code = NotFound desc = could not find container \"02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6\": container with ID starting with 02368f0d186c7de1c3dc25d5c60985799ded799314f332c5132230a413d531a6 not found: ID does not exist" Jan 26 07:57:35 crc kubenswrapper[4806]: I0126 07:57:35.047941 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 07:57:35 crc kubenswrapper[4806]: E0126 07:57:35.366289 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="1.6s" Jan 26 07:57:36 crc kubenswrapper[4806]: E0126 07:57:36.967366 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="3.2s" Jan 26 07:57:37 crc kubenswrapper[4806]: E0126 07:57:37.071121 4806 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.66:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:37 crc kubenswrapper[4806]: I0126 07:57:37.071683 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:37 crc kubenswrapper[4806]: E0126 07:57:37.097826 4806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.66:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e38e45eafae68 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 07:57:37.095454312 +0000 UTC m=+236.359862368,LastTimestamp:2026-01-26 07:57:37.095454312 +0000 UTC m=+236.359862368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 07:57:37 crc kubenswrapper[4806]: E0126 07:57:37.130229 4806 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.66:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" volumeName="registry-storage" Jan 26 07:57:37 crc kubenswrapper[4806]: I0126 07:57:37.746183 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"86de97beb670491e720c5e95fb3a9dd8fce93f8dd5ccef493cde80b5b8ffc973"} Jan 26 07:57:37 crc kubenswrapper[4806]: I0126 07:57:37.746569 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"600097f776f0addb065693c6a287ab29b7a89af1bc050a4f74eb2c2fa7effff2"} Jan 26 07:57:37 crc kubenswrapper[4806]: I0126 07:57:37.747161 4806 status_manager.go:851] "Failed to get status for pod" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:37 crc kubenswrapper[4806]: E0126 07:57:37.747475 4806 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.66:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:57:39 crc kubenswrapper[4806]: E0126 07:57:39.890119 4806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.66:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e38e45eafae68 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 07:57:37.095454312 +0000 UTC m=+236.359862368,LastTimestamp:2026-01-26 07:57:37.095454312 +0000 UTC m=+236.359862368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 07:57:40 crc kubenswrapper[4806]: E0126 07:57:40.168857 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="6.4s" Jan 26 07:57:41 crc kubenswrapper[4806]: I0126 07:57:41.046097 4806 status_manager.go:851] "Failed to get status for pod" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:44 crc kubenswrapper[4806]: I0126 07:57:44.792361 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 07:57:44 crc kubenswrapper[4806]: I0126 07:57:44.792831 4806 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6" exitCode=1 Jan 26 07:57:44 crc kubenswrapper[4806]: I0126 07:57:44.792876 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6"} Jan 26 07:57:44 crc kubenswrapper[4806]: I0126 07:57:44.793651 4806 scope.go:117] "RemoveContainer" containerID="4c111f2ec62507ccc570af4f4da19232bfa0946bb9e67bde595d5a3d66eff6a6" Jan 26 07:57:44 crc kubenswrapper[4806]: I0126 07:57:44.793937 4806 status_manager.go:851] "Failed to get status for pod" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:44 crc kubenswrapper[4806]: I0126 07:57:44.794612 4806 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:45 crc kubenswrapper[4806]: I0126 07:57:45.803599 4806 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 07:57:45 crc kubenswrapper[4806]: I0126 07:57:45.804182 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b3795ddd56a7721b7a198f2fb6cb650014c207d206ff7a016c5d45b05d1293a2"} Jan 26 07:57:45 crc kubenswrapper[4806]: I0126 07:57:45.806373 4806 status_manager.go:851] "Failed to get status for pod" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:45 crc kubenswrapper[4806]: I0126 07:57:45.806898 4806 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.041592 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.042585 4806 status_manager.go:851] "Failed to get status for pod" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.042786 4806 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.054357 4806 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1" Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.054400 4806 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1" Jan 26 07:57:46 crc kubenswrapper[4806]: E0126 07:57:46.054816 4806 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.055247 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:46 crc kubenswrapper[4806]: W0126 07:57:46.071333 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-539747b5e71e5991c124ec137bf191e2952801dd0819b3d7cd903bd35febc0f8 WatchSource:0}: Error finding container 539747b5e71e5991c124ec137bf191e2952801dd0819b3d7cd903bd35febc0f8: Status 404 returned error can't find the container with id 539747b5e71e5991c124ec137bf191e2952801dd0819b3d7cd903bd35febc0f8 Jan 26 07:57:46 crc kubenswrapper[4806]: E0126 07:57:46.569628 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.66:6443: connect: connection refused" interval="7s" Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.814090 4806 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4d105a3fe6a96fec8272ff6ff31b47ef5873ba3126b18e9c9e39cc2c41b38e60" exitCode=0 Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.814139 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4d105a3fe6a96fec8272ff6ff31b47ef5873ba3126b18e9c9e39cc2c41b38e60"} Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.814170 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"539747b5e71e5991c124ec137bf191e2952801dd0819b3d7cd903bd35febc0f8"} Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.814468 4806 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1" Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.814483 4806 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1" Jan 26 07:57:46 crc kubenswrapper[4806]: E0126 07:57:46.814893 4806 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.815014 4806 status_manager.go:851] "Failed to get status for pod" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:46 crc kubenswrapper[4806]: I0126 07:57:46.815391 4806 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.66:6443: connect: connection refused" Jan 26 07:57:47 crc kubenswrapper[4806]: I0126 07:57:47.840155 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5af8c2df8f2b20406a7edadadf89489b29e48f80a71812b81c9e353f44c27a23"} Jan 26 07:57:47 crc kubenswrapper[4806]: I0126 07:57:47.840466 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eff43bc87df9fba95a27d6525788874e938d1058c3322b0b6bea480d6087fbc6"} Jan 26 07:57:47 crc kubenswrapper[4806]: I0126 07:57:47.840476 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"07fec68774824ea4263cd7746689ed28bb2a898b982fcf97f40f0cb770dbdb15"} Jan 26 07:57:47 crc kubenswrapper[4806]: I0126 07:57:47.840485 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ebc382e287c00f5b4060963248afc6a8c80a6f7936943823286b05ac1fad1701"} Jan 26 07:57:48 crc kubenswrapper[4806]: I0126 07:57:48.439888 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:57:48 crc kubenswrapper[4806]: I0126 07:57:48.847093 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"31eeb6d76ec0f42d0c437b44c8801f0c37cadd10701fb4720c9f205290445d47"} Jan 26 07:57:48 crc kubenswrapper[4806]: I0126 07:57:48.847297 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:48 crc kubenswrapper[4806]: I0126 07:57:48.847405 4806 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1" Jan 26 07:57:48 crc kubenswrapper[4806]: I0126 07:57:48.847432 4806 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1" Jan 26 07:57:51 crc kubenswrapper[4806]: I0126 07:57:51.056089 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:51 crc kubenswrapper[4806]: I0126 07:57:51.056389 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:51 crc kubenswrapper[4806]: I0126 07:57:51.061500 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:53 crc kubenswrapper[4806]: I0126 07:57:53.867426 4806 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:57:53 crc kubenswrapper[4806]: I0126 07:57:53.958332 4806 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4e6c3724-3da4-44b5-9ad9-d61795ea8abc" Jan 26 07:57:54 crc kubenswrapper[4806]: I0126 07:57:54.351800 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:57:54 crc kubenswrapper[4806]: I0126 07:57:54.355874 4806 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:57:54 crc kubenswrapper[4806]: I0126 07:57:54.877443 4806 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1" Jan 26 07:57:54 crc kubenswrapper[4806]: I0126 07:57:54.877840 4806 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="5f837be3-c97c-4ec6-9ac0-1b2c210a2bd1" Jan 26 07:57:54 crc kubenswrapper[4806]: I0126 07:57:54.879977 4806 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="4e6c3724-3da4-44b5-9ad9-d61795ea8abc" Jan 26 07:57:58 crc kubenswrapper[4806]: I0126 07:57:58.446856 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 07:58:03 crc kubenswrapper[4806]: I0126 07:58:03.664078 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 07:58:03 crc kubenswrapper[4806]: I0126 07:58:03.695664 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 07:58:03 crc kubenswrapper[4806]: I0126 07:58:03.975839 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 07:58:04 crc kubenswrapper[4806]: I0126 07:58:04.246991 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 07:58:04 crc kubenswrapper[4806]: I0126 07:58:04.406249 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 07:58:04 crc kubenswrapper[4806]: I0126 07:58:04.434571 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 07:58:04 crc kubenswrapper[4806]: I0126 07:58:04.668069 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 07:58:04 crc kubenswrapper[4806]: I0126 07:58:04.838334 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 07:58:05 crc kubenswrapper[4806]: I0126 07:58:05.125973 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 07:58:05 crc kubenswrapper[4806]: I0126 07:58:05.518377 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 07:58:05 crc kubenswrapper[4806]: I0126 07:58:05.614337 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 07:58:05 crc kubenswrapper[4806]: I0126 07:58:05.746742 4806 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 07:58:05 crc kubenswrapper[4806]: I0126 07:58:05.779631 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 07:58:05 crc kubenswrapper[4806]: I0126 07:58:05.781885 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 07:58:05 crc 
kubenswrapper[4806]: I0126 07:58:05.907397 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 07:58:06 crc kubenswrapper[4806]: I0126 07:58:06.161105 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 07:58:06 crc kubenswrapper[4806]: I0126 07:58:06.450708 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 07:58:06 crc kubenswrapper[4806]: I0126 07:58:06.498805 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 07:58:06 crc kubenswrapper[4806]: I0126 07:58:06.525334 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 07:58:06 crc kubenswrapper[4806]: I0126 07:58:06.540706 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 07:58:06 crc kubenswrapper[4806]: I0126 07:58:06.548702 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 07:58:06 crc kubenswrapper[4806]: I0126 07:58:06.557350 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 07:58:06 crc kubenswrapper[4806]: I0126 07:58:06.584251 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 07:58:06 crc kubenswrapper[4806]: I0126 07:58:06.673313 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 07:58:06 crc kubenswrapper[4806]: I0126 07:58:06.676119 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 07:58:06 crc kubenswrapper[4806]: I0126 07:58:06.687246 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 07:58:06 crc kubenswrapper[4806]: I0126 07:58:06.987410 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.194908 4806 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.199520 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.199592 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.205318 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.205411 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.206179 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 07:58:07 crc 
kubenswrapper[4806]: I0126 07:58:07.229168 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=14.229145242 podStartE2EDuration="14.229145242s" podCreationTimestamp="2026-01-26 07:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:58:07.217316302 +0000 UTC m=+266.481724368" watchObservedRunningTime="2026-01-26 07:58:07.229145242 +0000 UTC m=+266.493553318" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.304268 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.391416 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.452311 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.586381 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.638330 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.767897 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.796006 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.827679 4806 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.871468 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.918369 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.926428 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.955772 4806 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.978783 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 07:58:07 crc kubenswrapper[4806]: I0126 07:58:07.979981 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.022080 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.151037 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 
07:58:08.170367 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.242032 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.322234 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.334877 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.476141 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.510857 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.592848 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.595408 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.605653 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.643008 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.703024 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.751760 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.763864 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 07:58:08 crc kubenswrapper[4806]: I0126 07:58:08.861893 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.002079 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.084361 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.131697 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.172512 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.179878 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.270591 4806 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.313677 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.355484 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.376889 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.488704 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.515220 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.561208 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.571824 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.577933 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.627580 4806 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.725292 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.733186 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.765871 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.769031 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.844370 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.885495 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 07:58:09 crc kubenswrapper[4806]: I0126 07:58:09.890069 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.109446 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.113938 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.118354 4806 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.149543 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.193984 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.221935 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.263722 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.271455 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.354542 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.376408 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.382106 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.535664 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.550190 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.606915 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.621642 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.632780 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.662234 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.680116 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.713562 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.819010 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.875548 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.943022 4806 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 07:58:10 crc kubenswrapper[4806]: I0126 07:58:10.972936 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.010110 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.035379 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.045003 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.085299 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.109043 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.153412 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.199348 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.217290 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.386726 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.414862 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.438051 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.462193 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.598679 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.711482 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.757267 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.793021 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.797602 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.799290 4806 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.826003 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.858906 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.904913 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 07:58:11 crc kubenswrapper[4806]: I0126 07:58:11.950049 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.062385 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.152247 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.221679 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.310252 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.320233 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.324878 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.389221 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.493089 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.566792 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.713396 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.716363 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.791925 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.793730 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.812027 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.817885 4806 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.855002 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 07:58:12 crc kubenswrapper[4806]: I0126 07:58:12.873875 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.007489 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.101634 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.216788 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.217983 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.287720 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.337502 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.506044 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.537077 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.678589 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.737507 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.745237 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.796288 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.796881 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 07:58:13 crc kubenswrapper[4806]: I0126 07:58:13.827804 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.098830 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.148177 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 07:58:14 crc 
kubenswrapper[4806]: I0126 07:58:14.158468 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.205478 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.263353 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.388306 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.420204 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.459040 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.460869 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.505916 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.535200 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.557267 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.683254 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.863072 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.916870 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.962598 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.980168 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 07:58:14 crc kubenswrapper[4806]: I0126 07:58:14.992242 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.052344 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.149865 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.167563 4806 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.215067 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.229720 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.259651 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.293356 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.304270 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.319934 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.337981 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.409088 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.415546 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.425659 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.435230 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.472843 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.500435 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.511977 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.529405 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.591000 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.596812 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.661881 4806 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.705428 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.765582 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.769303 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.779024 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.825003 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.828156 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.843723 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.865613 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.899804 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 07:58:15 crc kubenswrapper[4806]: I0126 07:58:15.928963 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.023145 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.059199 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.069072 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.100368 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.128536 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.148792 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.152119 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.176962 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.218980 4806 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.314050 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.326162 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.450585 4806 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.450961 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://86de97beb670491e720c5e95fb3a9dd8fce93f8dd5ccef493cde80b5b8ffc973" gracePeriod=5 Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.555939 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.566300 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.589257 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.619124 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.625275 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.671346 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.692554 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.715106 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.737792 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.747182 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.833113 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 07:58:16 crc kubenswrapper[4806]: I0126 07:58:16.852041 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 07:58:17 crc kubenswrapper[4806]: I0126 07:58:17.038870 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 07:58:17 crc kubenswrapper[4806]: I0126 07:58:17.092190 4806 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 07:58:17 crc kubenswrapper[4806]: I0126 07:58:17.104352 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 07:58:17 crc kubenswrapper[4806]: I0126 07:58:17.159332 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 07:58:17 crc kubenswrapper[4806]: I0126 07:58:17.249506 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 07:58:17 crc kubenswrapper[4806]: I0126 07:58:17.502765 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 07:58:17 crc kubenswrapper[4806]: I0126 07:58:17.620451 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 07:58:17 crc kubenswrapper[4806]: I0126 07:58:17.625406 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 07:58:17 crc kubenswrapper[4806]: I0126 07:58:17.636504 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 07:58:17 crc kubenswrapper[4806]: I0126 07:58:17.872135 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 07:58:17 crc kubenswrapper[4806]: I0126 07:58:17.936372 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 07:58:18 crc kubenswrapper[4806]: I0126 07:58:18.037778 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 07:58:18 crc kubenswrapper[4806]: I0126 07:58:18.150480 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 07:58:18 crc kubenswrapper[4806]: I0126 07:58:18.299236 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 07:58:18 crc kubenswrapper[4806]: I0126 07:58:18.929626 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 07:58:19 crc kubenswrapper[4806]: I0126 07:58:19.008855 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 07:58:19 crc kubenswrapper[4806]: I0126 07:58:19.473588 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 07:58:19 crc kubenswrapper[4806]: I0126 07:58:19.524288 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 07:58:19 crc kubenswrapper[4806]: I0126 07:58:19.651250 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 07:58:19 crc kubenswrapper[4806]: I0126 07:58:19.784018 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.215568 4806 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.469175 4806 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.589328 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bq7zd"] Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.589604 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bq7zd" podUID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" containerName="registry-server" containerID="cri-o://1082596f16b184b7a4358a615449d25564638f2cb9a44bbc2084ed2e6fe2e0d2" gracePeriod=30 Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.603355 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-79ddh"] Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.604030 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-79ddh" podUID="df97f49a-b950-45f2-8c66-52f2c6c33163" containerName="registry-server" containerID="cri-o://7ae882badc1d1fc9fe36a1400418f7f65b07c4c628603331aba0c7be64238ef2" gracePeriod=30 Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.609289 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vd8ts"] Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.609842 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" podUID="84d147bb-634e-40fb-a631-91ff228c0801" containerName="marketplace-operator" containerID="cri-o://5f08d65064dbbb7d8267d85acd04d18b836184e8e70e780e307341d0f8bcdef4" gracePeriod=30 Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.620368 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts6cz"] Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.620777 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ts6cz" podUID="a078c937-6bed-4604-a0a1-25c9c7d2503d" containerName="registry-server" containerID="cri-o://763832a6f11cba7a659fb2a5f7e0a288e2a51bb64b901f54acd3ab3d3cadecb5" gracePeriod=30 Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.630090 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5p4nt"] Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.631188 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5p4nt" podUID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" containerName="registry-server" containerID="cri-o://3e86adb9182afa3b208548f2d08bd7436c4a16dd25644a95bcbb36aaf3d3e9d1" gracePeriod=30 Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.674285 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4mgsn"] Jan 26 07:58:20 crc kubenswrapper[4806]: E0126 07:58:20.674506 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" containerName="installer" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.674540 4806 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" containerName="installer" Jan 26 07:58:20 crc kubenswrapper[4806]: E0126 07:58:20.674549 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.674555 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.674650 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc2b475a-c1a6-46d8-bbc6-a8a7f5934df1" containerName="installer" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.674664 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.675056 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.692727 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4mgsn"] Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.747848 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54cc9617-4cbc-4346-916a-cded431da40b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4mgsn\" (UID: \"54cc9617-4cbc-4346-916a-cded431da40b\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.748228 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl6z2\" (UniqueName: \"kubernetes.io/projected/54cc9617-4cbc-4346-916a-cded431da40b-kube-api-access-nl6z2\") pod \"marketplace-operator-79b997595-4mgsn\" (UID: \"54cc9617-4cbc-4346-916a-cded431da40b\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.748256 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/54cc9617-4cbc-4346-916a-cded431da40b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4mgsn\" (UID: \"54cc9617-4cbc-4346-916a-cded431da40b\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.849673 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54cc9617-4cbc-4346-916a-cded431da40b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4mgsn\" (UID: \"54cc9617-4cbc-4346-916a-cded431da40b\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.849725 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl6z2\" (UniqueName: \"kubernetes.io/projected/54cc9617-4cbc-4346-916a-cded431da40b-kube-api-access-nl6z2\") pod \"marketplace-operator-79b997595-4mgsn\" (UID: \"54cc9617-4cbc-4346-916a-cded431da40b\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.849745 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/54cc9617-4cbc-4346-916a-cded431da40b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4mgsn\" (UID: \"54cc9617-4cbc-4346-916a-cded431da40b\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.851810 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/54cc9617-4cbc-4346-916a-cded431da40b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4mgsn\" (UID: \"54cc9617-4cbc-4346-916a-cded431da40b\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.877541 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/54cc9617-4cbc-4346-916a-cded431da40b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4mgsn\" (UID: \"54cc9617-4cbc-4346-916a-cded431da40b\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:20 crc kubenswrapper[4806]: I0126 07:58:20.877597 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl6z2\" (UniqueName: \"kubernetes.io/projected/54cc9617-4cbc-4346-916a-cded431da40b-kube-api-access-nl6z2\") pod \"marketplace-operator-79b997595-4mgsn\" (UID: \"54cc9617-4cbc-4346-916a-cded431da40b\") " pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:20.999590 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.037353 4806 generic.go:334] "Generic (PLEG): container finished" podID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" containerID="1082596f16b184b7a4358a615449d25564638f2cb9a44bbc2084ed2e6fe2e0d2" exitCode=0 Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.037429 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq7zd" event={"ID":"0531f954-d1d9-42f0-bd29-f8ff5b0871b4","Type":"ContainerDied","Data":"1082596f16b184b7a4358a615449d25564638f2cb9a44bbc2084ed2e6fe2e0d2"} Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.045999 4806 generic.go:334] "Generic (PLEG): container finished" podID="a078c937-6bed-4604-a0a1-25c9c7d2503d" containerID="763832a6f11cba7a659fb2a5f7e0a288e2a51bb64b901f54acd3ab3d3cadecb5" exitCode=0 Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.048287 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts6cz" event={"ID":"a078c937-6bed-4604-a0a1-25c9c7d2503d","Type":"ContainerDied","Data":"763832a6f11cba7a659fb2a5f7e0a288e2a51bb64b901f54acd3ab3d3cadecb5"} Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.049403 4806 generic.go:334] "Generic (PLEG): container finished" podID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" containerID="3e86adb9182afa3b208548f2d08bd7436c4a16dd25644a95bcbb36aaf3d3e9d1" exitCode=0 Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.049452 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p4nt" 
event={"ID":"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3","Type":"ContainerDied","Data":"3e86adb9182afa3b208548f2d08bd7436c4a16dd25644a95bcbb36aaf3d3e9d1"} Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.050599 4806 generic.go:334] "Generic (PLEG): container finished" podID="84d147bb-634e-40fb-a631-91ff228c0801" containerID="5f08d65064dbbb7d8267d85acd04d18b836184e8e70e780e307341d0f8bcdef4" exitCode=0 Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.050655 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" event={"ID":"84d147bb-634e-40fb-a631-91ff228c0801","Type":"ContainerDied","Data":"5f08d65064dbbb7d8267d85acd04d18b836184e8e70e780e307341d0f8bcdef4"} Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.063460 4806 generic.go:334] "Generic (PLEG): container finished" podID="df97f49a-b950-45f2-8c66-52f2c6c33163" containerID="7ae882badc1d1fc9fe36a1400418f7f65b07c4c628603331aba0c7be64238ef2" exitCode=0 Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.063508 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79ddh" event={"ID":"df97f49a-b950-45f2-8c66-52f2c6c33163","Type":"ContainerDied","Data":"7ae882badc1d1fc9fe36a1400418f7f65b07c4c628603331aba0c7be64238ef2"} Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.079496 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.158004 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84d147bb-634e-40fb-a631-91ff228c0801-marketplace-operator-metrics\") pod \"84d147bb-634e-40fb-a631-91ff228c0801\" (UID: \"84d147bb-634e-40fb-a631-91ff228c0801\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.158121 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9phg\" (UniqueName: \"kubernetes.io/projected/84d147bb-634e-40fb-a631-91ff228c0801-kube-api-access-n9phg\") pod \"84d147bb-634e-40fb-a631-91ff228c0801\" (UID: \"84d147bb-634e-40fb-a631-91ff228c0801\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.158140 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84d147bb-634e-40fb-a631-91ff228c0801-marketplace-trusted-ca\") pod \"84d147bb-634e-40fb-a631-91ff228c0801\" (UID: \"84d147bb-634e-40fb-a631-91ff228c0801\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.159631 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84d147bb-634e-40fb-a631-91ff228c0801-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "84d147bb-634e-40fb-a631-91ff228c0801" (UID: "84d147bb-634e-40fb-a631-91ff228c0801"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.166464 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84d147bb-634e-40fb-a631-91ff228c0801-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "84d147bb-634e-40fb-a631-91ff228c0801" (UID: "84d147bb-634e-40fb-a631-91ff228c0801"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.168275 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84d147bb-634e-40fb-a631-91ff228c0801-kube-api-access-n9phg" (OuterVolumeSpecName: "kube-api-access-n9phg") pod "84d147bb-634e-40fb-a631-91ff228c0801" (UID: "84d147bb-634e-40fb-a631-91ff228c0801"). InnerVolumeSpecName "kube-api-access-n9phg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.257369 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.260141 4806 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84d147bb-634e-40fb-a631-91ff228c0801-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.260163 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9phg\" (UniqueName: \"kubernetes.io/projected/84d147bb-634e-40fb-a631-91ff228c0801-kube-api-access-n9phg\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.260172 4806 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84d147bb-634e-40fb-a631-91ff228c0801-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.287788 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.297952 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.360994 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-catalog-content\") pod \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\" (UID: \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.361041 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-utilities\") pod \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\" (UID: \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.361066 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df97f49a-b950-45f2-8c66-52f2c6c33163-utilities\") pod \"df97f49a-b950-45f2-8c66-52f2c6c33163\" (UID: \"df97f49a-b950-45f2-8c66-52f2c6c33163\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.361095 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a078c937-6bed-4604-a0a1-25c9c7d2503d-catalog-content\") pod \"a078c937-6bed-4604-a0a1-25c9c7d2503d\" (UID: \"a078c937-6bed-4604-a0a1-25c9c7d2503d\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.361117 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpx27\" (UniqueName: \"kubernetes.io/projected/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-kube-api-access-qpx27\") pod \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\" (UID: \"0531f954-d1d9-42f0-bd29-f8ff5b0871b4\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.361136 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wwxr\" (UniqueName: \"kubernetes.io/projected/a078c937-6bed-4604-a0a1-25c9c7d2503d-kube-api-access-8wwxr\") pod \"a078c937-6bed-4604-a0a1-25c9c7d2503d\" (UID: \"a078c937-6bed-4604-a0a1-25c9c7d2503d\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.361917 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a078c937-6bed-4604-a0a1-25c9c7d2503d-utilities\") pod \"a078c937-6bed-4604-a0a1-25c9c7d2503d\" (UID: \"a078c937-6bed-4604-a0a1-25c9c7d2503d\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.361967 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df97f49a-b950-45f2-8c66-52f2c6c33163-catalog-content\") pod \"df97f49a-b950-45f2-8c66-52f2c6c33163\" (UID: \"df97f49a-b950-45f2-8c66-52f2c6c33163\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.362015 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slhdg\" (UniqueName: \"kubernetes.io/projected/df97f49a-b950-45f2-8c66-52f2c6c33163-kube-api-access-slhdg\") pod \"df97f49a-b950-45f2-8c66-52f2c6c33163\" (UID: \"df97f49a-b950-45f2-8c66-52f2c6c33163\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.363981 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-utilities" (OuterVolumeSpecName: "utilities") pod 
"0531f954-d1d9-42f0-bd29-f8ff5b0871b4" (UID: "0531f954-d1d9-42f0-bd29-f8ff5b0871b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.364021 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df97f49a-b950-45f2-8c66-52f2c6c33163-utilities" (OuterVolumeSpecName: "utilities") pod "df97f49a-b950-45f2-8c66-52f2c6c33163" (UID: "df97f49a-b950-45f2-8c66-52f2c6c33163"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.366083 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df97f49a-b950-45f2-8c66-52f2c6c33163-kube-api-access-slhdg" (OuterVolumeSpecName: "kube-api-access-slhdg") pod "df97f49a-b950-45f2-8c66-52f2c6c33163" (UID: "df97f49a-b950-45f2-8c66-52f2c6c33163"). InnerVolumeSpecName "kube-api-access-slhdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.366154 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a078c937-6bed-4604-a0a1-25c9c7d2503d-utilities" (OuterVolumeSpecName: "utilities") pod "a078c937-6bed-4604-a0a1-25c9c7d2503d" (UID: "a078c937-6bed-4604-a0a1-25c9c7d2503d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.367654 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a078c937-6bed-4604-a0a1-25c9c7d2503d-kube-api-access-8wwxr" (OuterVolumeSpecName: "kube-api-access-8wwxr") pod "a078c937-6bed-4604-a0a1-25c9c7d2503d" (UID: "a078c937-6bed-4604-a0a1-25c9c7d2503d"). InnerVolumeSpecName "kube-api-access-8wwxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.369783 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-kube-api-access-qpx27" (OuterVolumeSpecName: "kube-api-access-qpx27") pod "0531f954-d1d9-42f0-bd29-f8ff5b0871b4" (UID: "0531f954-d1d9-42f0-bd29-f8ff5b0871b4"). InnerVolumeSpecName "kube-api-access-qpx27". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.398895 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a078c937-6bed-4604-a0a1-25c9c7d2503d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a078c937-6bed-4604-a0a1-25c9c7d2503d" (UID: "a078c937-6bed-4604-a0a1-25c9c7d2503d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.412304 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.439559 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0531f954-d1d9-42f0-bd29-f8ff5b0871b4" (UID: "0531f954-d1d9-42f0-bd29-f8ff5b0871b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.449879 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df97f49a-b950-45f2-8c66-52f2c6c33163-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df97f49a-b950-45f2-8c66-52f2c6c33163" (UID: "df97f49a-b950-45f2-8c66-52f2c6c33163"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.462818 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-utilities\") pod \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\" (UID: \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.462930 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qk7b\" (UniqueName: \"kubernetes.io/projected/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-kube-api-access-9qk7b\") pod \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\" (UID: \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.462991 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-catalog-content\") pod \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\" (UID: \"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3\") " Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.463304 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slhdg\" (UniqueName: \"kubernetes.io/projected/df97f49a-b950-45f2-8c66-52f2c6c33163-kube-api-access-slhdg\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.463317 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.463325 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.463336 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df97f49a-b950-45f2-8c66-52f2c6c33163-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.463363 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a078c937-6bed-4604-a0a1-25c9c7d2503d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.463389 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpx27\" (UniqueName: \"kubernetes.io/projected/0531f954-d1d9-42f0-bd29-f8ff5b0871b4-kube-api-access-qpx27\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.463400 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wwxr\" (UniqueName: \"kubernetes.io/projected/a078c937-6bed-4604-a0a1-25c9c7d2503d-kube-api-access-8wwxr\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.463409 4806 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a078c937-6bed-4604-a0a1-25c9c7d2503d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.463421 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df97f49a-b950-45f2-8c66-52f2c6c33163-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.465276 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-utilities" (OuterVolumeSpecName: "utilities") pod "c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" (UID: "c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.468057 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-kube-api-access-9qk7b" (OuterVolumeSpecName: "kube-api-access-9qk7b") pod "c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" (UID: "c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3"). InnerVolumeSpecName "kube-api-access-9qk7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.564225 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.564259 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qk7b\" (UniqueName: \"kubernetes.io/projected/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-kube-api-access-9qk7b\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.577059 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4mgsn"] Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.590052 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" (UID: "c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 07:58:21 crc kubenswrapper[4806]: I0126 07:58:21.666472 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.018590 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.018677 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.069917 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" event={"ID":"84d147bb-634e-40fb-a631-91ff228c0801","Type":"ContainerDied","Data":"3ea9b6dafb11bcb753041d747bb04fe310c95a0c57dda8f232c5f214cb5d9a91"} Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.069986 4806 scope.go:117] "RemoveContainer" containerID="5f08d65064dbbb7d8267d85acd04d18b836184e8e70e780e307341d0f8bcdef4" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.070012 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vd8ts" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.071364 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" event={"ID":"54cc9617-4cbc-4346-916a-cded431da40b","Type":"ContainerStarted","Data":"d822d14262e11ff9d3dcecd13a5777ebb79f3b2df8d24bda6d369c318be13161"} Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.071454 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" event={"ID":"54cc9617-4cbc-4346-916a-cded431da40b","Type":"ContainerStarted","Data":"8bcec8912ed97cb5cb2b0ded6a22f85310b3de0dcb296cf63b9c366339ae8479"} Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.071474 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.071717 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.071797 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.071838 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.071866 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.071904 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.072001 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.072053 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.072127 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.072157 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.072702 4806 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.072729 4806 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.072742 4806 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.072754 4806 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.074450 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-79ddh" event={"ID":"df97f49a-b950-45f2-8c66-52f2c6c33163","Type":"ContainerDied","Data":"d6f7e14f6f659f0c374f0128c4d28383bdeed890dbcf5b42f10df91c952fa6b1"} Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.074624 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-79ddh" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.076696 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.077455 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bq7zd" event={"ID":"0531f954-d1d9-42f0-bd29-f8ff5b0871b4","Type":"ContainerDied","Data":"ceae611e088a277c36bfe42d897da1092b828ee94aeb89c5dfef7356c336c5c1"} Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.077607 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bq7zd" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.080085 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.080203 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.080240 4806 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="86de97beb670491e720c5e95fb3a9dd8fce93f8dd5ccef493cde80b5b8ffc973" exitCode=137 Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.080396 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.083244 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5p4nt" event={"ID":"c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3","Type":"ContainerDied","Data":"b63d2e646b24fa7834cb91f6f08c3ab94f38b88f43ead7aee31e10e506c2b45f"} Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.083341 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5p4nt" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.088395 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ts6cz" event={"ID":"a078c937-6bed-4604-a0a1-25c9c7d2503d","Type":"ContainerDied","Data":"442cb782efd043b3c8a6e90a2886b0e89480b6555ca7004b410e5567c8c18e19"} Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.088584 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ts6cz" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.098323 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4mgsn" podStartSLOduration=2.098304011 podStartE2EDuration="2.098304011s" podCreationTimestamp="2026-01-26 07:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:58:22.095693596 +0000 UTC m=+281.360101642" watchObservedRunningTime="2026-01-26 07:58:22.098304011 +0000 UTC m=+281.362712057" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.128551 4806 scope.go:117] "RemoveContainer" containerID="7ae882badc1d1fc9fe36a1400418f7f65b07c4c628603331aba0c7be64238ef2" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.166412 4806 scope.go:117] "RemoveContainer" containerID="4d03042de38e2da530de8bbfab73a394136fb2c89fdb30c69b2eafeed14a82b2" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.174273 4806 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.192598 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bq7zd"] Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.194208 4806 scope.go:117] "RemoveContainer" containerID="7f8dee80216d1651426ad8599d8f663f4a7b01e897924ccba87dc62ee3139bde" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.196556 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bq7zd"] Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.208369 4806 scope.go:117] "RemoveContainer" containerID="1082596f16b184b7a4358a615449d25564638f2cb9a44bbc2084ed2e6fe2e0d2" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.221671 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-79ddh"] Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.223008 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-79ddh"] Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.225086 4806 scope.go:117] "RemoveContainer" containerID="380b5ecf0eb94b7d73db31d33491c08b08b39c28920e47913488d09cd109854b" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.249585 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5p4nt"] Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.259628 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5p4nt"] Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.263768 4806 scope.go:117] "RemoveContainer" containerID="9f73378235c52bb9162bfaa56d05c51d249e027ec303ea8e8c5de45218d50f49" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.283276 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vd8ts"] Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.287187 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vd8ts"] Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.291341 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-ts6cz"] Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.294486 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ts6cz"] Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.300079 4806 scope.go:117] "RemoveContainer" containerID="86de97beb670491e720c5e95fb3a9dd8fce93f8dd5ccef493cde80b5b8ffc973" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.319696 4806 scope.go:117] "RemoveContainer" containerID="86de97beb670491e720c5e95fb3a9dd8fce93f8dd5ccef493cde80b5b8ffc973" Jan 26 07:58:22 crc kubenswrapper[4806]: E0126 07:58:22.320535 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86de97beb670491e720c5e95fb3a9dd8fce93f8dd5ccef493cde80b5b8ffc973\": container with ID starting with 86de97beb670491e720c5e95fb3a9dd8fce93f8dd5ccef493cde80b5b8ffc973 not found: ID does not exist" containerID="86de97beb670491e720c5e95fb3a9dd8fce93f8dd5ccef493cde80b5b8ffc973" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.320591 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86de97beb670491e720c5e95fb3a9dd8fce93f8dd5ccef493cde80b5b8ffc973"} err="failed to get container status \"86de97beb670491e720c5e95fb3a9dd8fce93f8dd5ccef493cde80b5b8ffc973\": rpc error: code = NotFound desc = could not find container \"86de97beb670491e720c5e95fb3a9dd8fce93f8dd5ccef493cde80b5b8ffc973\": container with ID starting with 86de97beb670491e720c5e95fb3a9dd8fce93f8dd5ccef493cde80b5b8ffc973 not found: ID does not exist" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.320626 4806 scope.go:117] "RemoveContainer" containerID="3e86adb9182afa3b208548f2d08bd7436c4a16dd25644a95bcbb36aaf3d3e9d1" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.335397 4806 scope.go:117] "RemoveContainer" containerID="ad4da5d9b0ba3adfe9a8c76216b1731a8aecddf2c5a6c649032f015d085c7f78" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.351695 4806 scope.go:117] "RemoveContainer" containerID="4dd53dcd0162e198ce2edfac6a6d9cf0d545222b546903179b887fc7b0343059" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.368327 4806 scope.go:117] "RemoveContainer" containerID="763832a6f11cba7a659fb2a5f7e0a288e2a51bb64b901f54acd3ab3d3cadecb5" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.384059 4806 scope.go:117] "RemoveContainer" containerID="d633fc37d630dcb69de0324a4873c8763cb10c676151f0af5806d6f3c27daa82" Jan 26 07:58:22 crc kubenswrapper[4806]: I0126 07:58:22.397841 4806 scope.go:117] "RemoveContainer" containerID="c632ee6f5c612e8ebfbd2f6a9c7186f34b5dba409c11fd08809f428c0b20f8c3" Jan 26 07:58:23 crc kubenswrapper[4806]: I0126 07:58:23.052588 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" path="/var/lib/kubelet/pods/0531f954-d1d9-42f0-bd29-f8ff5b0871b4/volumes" Jan 26 07:58:23 crc kubenswrapper[4806]: I0126 07:58:23.053908 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84d147bb-634e-40fb-a631-91ff228c0801" path="/var/lib/kubelet/pods/84d147bb-634e-40fb-a631-91ff228c0801/volumes" Jan 26 07:58:23 crc kubenswrapper[4806]: I0126 07:58:23.054587 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a078c937-6bed-4604-a0a1-25c9c7d2503d" path="/var/lib/kubelet/pods/a078c937-6bed-4604-a0a1-25c9c7d2503d/volumes" Jan 26 07:58:23 crc kubenswrapper[4806]: I0126 07:58:23.055901 4806 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" path="/var/lib/kubelet/pods/c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3/volumes" Jan 26 07:58:23 crc kubenswrapper[4806]: I0126 07:58:23.056714 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df97f49a-b950-45f2-8c66-52f2c6c33163" path="/var/lib/kubelet/pods/df97f49a-b950-45f2-8c66-52f2c6c33163/volumes" Jan 26 07:58:23 crc kubenswrapper[4806]: I0126 07:58:23.057349 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 07:58:40 crc kubenswrapper[4806]: I0126 07:58:40.884154 4806 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 07:58:45 crc kubenswrapper[4806]: I0126 07:58:45.969270 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.399489 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xfp4f"] Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400430 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" containerName="extract-utilities" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400445 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" containerName="extract-utilities" Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400460 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a078c937-6bed-4604-a0a1-25c9c7d2503d" containerName="extract-utilities" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400467 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a078c937-6bed-4604-a0a1-25c9c7d2503d" containerName="extract-utilities" Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400483 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df97f49a-b950-45f2-8c66-52f2c6c33163" containerName="registry-server" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400491 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="df97f49a-b950-45f2-8c66-52f2c6c33163" containerName="registry-server" Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400500 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df97f49a-b950-45f2-8c66-52f2c6c33163" containerName="extract-content" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400509 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="df97f49a-b950-45f2-8c66-52f2c6c33163" containerName="extract-content" Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400557 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" containerName="registry-server" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400567 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" containerName="registry-server" Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400579 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" containerName="extract-content" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400587 4806 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" containerName="extract-content" Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400599 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df97f49a-b950-45f2-8c66-52f2c6c33163" containerName="extract-utilities" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400607 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="df97f49a-b950-45f2-8c66-52f2c6c33163" containerName="extract-utilities" Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400619 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84d147bb-634e-40fb-a631-91ff228c0801" containerName="marketplace-operator" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400626 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="84d147bb-634e-40fb-a631-91ff228c0801" containerName="marketplace-operator" Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400640 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" containerName="registry-server" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400647 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" containerName="registry-server" Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400657 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" containerName="extract-utilities" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400665 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" containerName="extract-utilities" Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400676 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a078c937-6bed-4604-a0a1-25c9c7d2503d" containerName="registry-server" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400684 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a078c937-6bed-4604-a0a1-25c9c7d2503d" containerName="registry-server" Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400696 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a078c937-6bed-4604-a0a1-25c9c7d2503d" containerName="extract-content" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400703 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a078c937-6bed-4604-a0a1-25c9c7d2503d" containerName="extract-content" Jan 26 07:59:36 crc kubenswrapper[4806]: E0126 07:59:36.400714 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" containerName="extract-content" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400722 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" containerName="extract-content" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400832 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="a078c937-6bed-4604-a0a1-25c9c7d2503d" containerName="registry-server" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400845 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0531f954-d1d9-42f0-bd29-f8ff5b0871b4" containerName="registry-server" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400860 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="df97f49a-b950-45f2-8c66-52f2c6c33163" containerName="registry-server" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400870 4806 
memory_manager.go:354] "RemoveStaleState removing state" podUID="84d147bb-634e-40fb-a631-91ff228c0801" containerName="marketplace-operator" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.400880 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c31e36f6-aabe-4f1e-8e7e-3bb086ec1cd3" containerName="registry-server" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.402269 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.405257 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.451674 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xfp4f"] Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.599610 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f427x"] Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.600598 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.602839 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.603581 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557bfe89-c128-469e-8e26-f80ecb3a1cb1-utilities\") pod \"community-operators-xfp4f\" (UID: \"557bfe89-c128-469e-8e26-f80ecb3a1cb1\") " pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.603701 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vngml\" (UniqueName: \"kubernetes.io/projected/557bfe89-c128-469e-8e26-f80ecb3a1cb1-kube-api-access-vngml\") pod \"community-operators-xfp4f\" (UID: \"557bfe89-c128-469e-8e26-f80ecb3a1cb1\") " pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.603733 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557bfe89-c128-469e-8e26-f80ecb3a1cb1-catalog-content\") pod \"community-operators-xfp4f\" (UID: \"557bfe89-c128-469e-8e26-f80ecb3a1cb1\") " pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.617026 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f427x"] Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.705176 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c92773d-ebe0-4739-9668-f826721f9a36-catalog-content\") pod \"certified-operators-f427x\" (UID: \"0c92773d-ebe0-4739-9668-f826721f9a36\") " pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.705392 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj8rj\" (UniqueName: 
\"kubernetes.io/projected/0c92773d-ebe0-4739-9668-f826721f9a36-kube-api-access-tj8rj\") pod \"certified-operators-f427x\" (UID: \"0c92773d-ebe0-4739-9668-f826721f9a36\") " pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.705460 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vngml\" (UniqueName: \"kubernetes.io/projected/557bfe89-c128-469e-8e26-f80ecb3a1cb1-kube-api-access-vngml\") pod \"community-operators-xfp4f\" (UID: \"557bfe89-c128-469e-8e26-f80ecb3a1cb1\") " pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.705666 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c92773d-ebe0-4739-9668-f826721f9a36-utilities\") pod \"certified-operators-f427x\" (UID: \"0c92773d-ebe0-4739-9668-f826721f9a36\") " pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.705765 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557bfe89-c128-469e-8e26-f80ecb3a1cb1-catalog-content\") pod \"community-operators-xfp4f\" (UID: \"557bfe89-c128-469e-8e26-f80ecb3a1cb1\") " pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.705847 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557bfe89-c128-469e-8e26-f80ecb3a1cb1-utilities\") pod \"community-operators-xfp4f\" (UID: \"557bfe89-c128-469e-8e26-f80ecb3a1cb1\") " pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.706267 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/557bfe89-c128-469e-8e26-f80ecb3a1cb1-catalog-content\") pod \"community-operators-xfp4f\" (UID: \"557bfe89-c128-469e-8e26-f80ecb3a1cb1\") " pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.707735 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/557bfe89-c128-469e-8e26-f80ecb3a1cb1-utilities\") pod \"community-operators-xfp4f\" (UID: \"557bfe89-c128-469e-8e26-f80ecb3a1cb1\") " pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.727087 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vngml\" (UniqueName: \"kubernetes.io/projected/557bfe89-c128-469e-8e26-f80ecb3a1cb1-kube-api-access-vngml\") pod \"community-operators-xfp4f\" (UID: \"557bfe89-c128-469e-8e26-f80ecb3a1cb1\") " pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.727410 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.808368 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c92773d-ebe0-4739-9668-f826721f9a36-catalog-content\") pod \"certified-operators-f427x\" (UID: \"0c92773d-ebe0-4739-9668-f826721f9a36\") " pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.808463 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj8rj\" (UniqueName: \"kubernetes.io/projected/0c92773d-ebe0-4739-9668-f826721f9a36-kube-api-access-tj8rj\") pod \"certified-operators-f427x\" (UID: \"0c92773d-ebe0-4739-9668-f826721f9a36\") " pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.808504 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c92773d-ebe0-4739-9668-f826721f9a36-utilities\") pod \"certified-operators-f427x\" (UID: \"0c92773d-ebe0-4739-9668-f826721f9a36\") " pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.809281 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c92773d-ebe0-4739-9668-f826721f9a36-utilities\") pod \"certified-operators-f427x\" (UID: \"0c92773d-ebe0-4739-9668-f826721f9a36\") " pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.809897 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c92773d-ebe0-4739-9668-f826721f9a36-catalog-content\") pod \"certified-operators-f427x\" (UID: \"0c92773d-ebe0-4739-9668-f826721f9a36\") " pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.831739 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj8rj\" (UniqueName: \"kubernetes.io/projected/0c92773d-ebe0-4739-9668-f826721f9a36-kube-api-access-tj8rj\") pod \"certified-operators-f427x\" (UID: \"0c92773d-ebe0-4739-9668-f826721f9a36\") " pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:36 crc kubenswrapper[4806]: I0126 07:59:36.921999 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:37 crc kubenswrapper[4806]: I0126 07:59:37.163187 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f427x"] Jan 26 07:59:37 crc kubenswrapper[4806]: I0126 07:59:37.168217 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xfp4f"] Jan 26 07:59:37 crc kubenswrapper[4806]: I0126 07:59:37.561473 4806 generic.go:334] "Generic (PLEG): container finished" podID="0c92773d-ebe0-4739-9668-f826721f9a36" containerID="e14cf1f80cde1f2f122559c1fd0acbfa47dca2690f34c3cf12c8a4cfa3864cfa" exitCode=0 Jan 26 07:59:37 crc kubenswrapper[4806]: I0126 07:59:37.561531 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f427x" event={"ID":"0c92773d-ebe0-4739-9668-f826721f9a36","Type":"ContainerDied","Data":"e14cf1f80cde1f2f122559c1fd0acbfa47dca2690f34c3cf12c8a4cfa3864cfa"} Jan 26 07:59:37 crc kubenswrapper[4806]: I0126 07:59:37.562337 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f427x" event={"ID":"0c92773d-ebe0-4739-9668-f826721f9a36","Type":"ContainerStarted","Data":"807e56befed45599a22bf28d1978114ae595e52c7e705496c2aa07293399cacc"} Jan 26 07:59:37 crc kubenswrapper[4806]: I0126 07:59:37.564596 4806 generic.go:334] "Generic (PLEG): container finished" podID="557bfe89-c128-469e-8e26-f80ecb3a1cb1" containerID="1de2527e5e07336a29030e581fc929386d5fe8295e5feaacc0ec9f758841dcdc" exitCode=0 Jan 26 07:59:37 crc kubenswrapper[4806]: I0126 07:59:37.564641 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfp4f" event={"ID":"557bfe89-c128-469e-8e26-f80ecb3a1cb1","Type":"ContainerDied","Data":"1de2527e5e07336a29030e581fc929386d5fe8295e5feaacc0ec9f758841dcdc"} Jan 26 07:59:37 crc kubenswrapper[4806]: I0126 07:59:37.564668 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfp4f" event={"ID":"557bfe89-c128-469e-8e26-f80ecb3a1cb1","Type":"ContainerStarted","Data":"6eb84cd01fead5bafdd196a55b979ae10d078be5aad19a4e05b13e82dc3a5f29"} Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.574504 4806 generic.go:334] "Generic (PLEG): container finished" podID="0c92773d-ebe0-4739-9668-f826721f9a36" containerID="bcc6b5f0e771e24f11095a38c7c0c121dd9d564083de2704bc90ea64f38bd0a7" exitCode=0 Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.574756 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f427x" event={"ID":"0c92773d-ebe0-4739-9668-f826721f9a36","Type":"ContainerDied","Data":"bcc6b5f0e771e24f11095a38c7c0c121dd9d564083de2704bc90ea64f38bd0a7"} Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.578648 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfp4f" event={"ID":"557bfe89-c128-469e-8e26-f80ecb3a1cb1","Type":"ContainerStarted","Data":"ed8adc95bc6b105b0d89a3e53039627c1b2f83a2cf024f3d258bcc3a91768f25"} Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.655973 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qdw7c"] Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.656834 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.674305 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qdw7c"] Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.771240 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/91810d32-1f8f-416c-890c-2c9c01199476-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.771859 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cgkv\" (UniqueName: \"kubernetes.io/projected/91810d32-1f8f-416c-890c-2c9c01199476-kube-api-access-9cgkv\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.771943 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91810d32-1f8f-416c-890c-2c9c01199476-trusted-ca\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.772008 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/91810d32-1f8f-416c-890c-2c9c01199476-registry-certificates\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.772083 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.772198 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/91810d32-1f8f-416c-890c-2c9c01199476-registry-tls\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.772278 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/91810d32-1f8f-416c-890c-2c9c01199476-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.772393 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/91810d32-1f8f-416c-890c-2c9c01199476-bound-sa-token\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.795257 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rqfsl"] Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.796233 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.801993 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.810883 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.850949 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqfsl"] Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.873568 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91810d32-1f8f-416c-890c-2c9c01199476-bound-sa-token\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.873635 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/91810d32-1f8f-416c-890c-2c9c01199476-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.873655 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cgkv\" (UniqueName: \"kubernetes.io/projected/91810d32-1f8f-416c-890c-2c9c01199476-kube-api-access-9cgkv\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.873688 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91810d32-1f8f-416c-890c-2c9c01199476-trusted-ca\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.873707 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/91810d32-1f8f-416c-890c-2c9c01199476-registry-certificates\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.873744 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/91810d32-1f8f-416c-890c-2c9c01199476-registry-tls\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.873769 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/91810d32-1f8f-416c-890c-2c9c01199476-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.875286 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/91810d32-1f8f-416c-890c-2c9c01199476-ca-trust-extracted\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.876094 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91810d32-1f8f-416c-890c-2c9c01199476-trusted-ca\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.876384 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/91810d32-1f8f-416c-890c-2c9c01199476-registry-certificates\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.879660 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/91810d32-1f8f-416c-890c-2c9c01199476-registry-tls\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.888723 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/91810d32-1f8f-416c-890c-2c9c01199476-installation-pull-secrets\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.894801 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cgkv\" (UniqueName: \"kubernetes.io/projected/91810d32-1f8f-416c-890c-2c9c01199476-kube-api-access-9cgkv\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: \"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.895278 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91810d32-1f8f-416c-890c-2c9c01199476-bound-sa-token\") pod \"image-registry-66df7c8f76-qdw7c\" (UID: 
\"91810d32-1f8f-416c-890c-2c9c01199476\") " pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.974621 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b94c8b95-1f08-4f96-a9c0-47aef79a823b-catalog-content\") pod \"redhat-marketplace-rqfsl\" (UID: \"b94c8b95-1f08-4f96-a9c0-47aef79a823b\") " pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.974692 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b94c8b95-1f08-4f96-a9c0-47aef79a823b-utilities\") pod \"redhat-marketplace-rqfsl\" (UID: \"b94c8b95-1f08-4f96-a9c0-47aef79a823b\") " pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.974741 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbcx2\" (UniqueName: \"kubernetes.io/projected/b94c8b95-1f08-4f96-a9c0-47aef79a823b-kube-api-access-bbcx2\") pod \"redhat-marketplace-rqfsl\" (UID: \"b94c8b95-1f08-4f96-a9c0-47aef79a823b\") " pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.990324 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.993892 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lnpjg"] Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.995508 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:38 crc kubenswrapper[4806]: I0126 07:59:38.999633 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.012834 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lnpjg"] Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.075676 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbcx2\" (UniqueName: \"kubernetes.io/projected/b94c8b95-1f08-4f96-a9c0-47aef79a823b-kube-api-access-bbcx2\") pod \"redhat-marketplace-rqfsl\" (UID: \"b94c8b95-1f08-4f96-a9c0-47aef79a823b\") " pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.075741 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b94c8b95-1f08-4f96-a9c0-47aef79a823b-catalog-content\") pod \"redhat-marketplace-rqfsl\" (UID: \"b94c8b95-1f08-4f96-a9c0-47aef79a823b\") " pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.075786 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b94c8b95-1f08-4f96-a9c0-47aef79a823b-utilities\") pod \"redhat-marketplace-rqfsl\" (UID: \"b94c8b95-1f08-4f96-a9c0-47aef79a823b\") " pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.076492 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b94c8b95-1f08-4f96-a9c0-47aef79a823b-catalog-content\") pod \"redhat-marketplace-rqfsl\" (UID: \"b94c8b95-1f08-4f96-a9c0-47aef79a823b\") " pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.076506 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b94c8b95-1f08-4f96-a9c0-47aef79a823b-utilities\") pod \"redhat-marketplace-rqfsl\" (UID: \"b94c8b95-1f08-4f96-a9c0-47aef79a823b\") " pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.097244 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbcx2\" (UniqueName: \"kubernetes.io/projected/b94c8b95-1f08-4f96-a9c0-47aef79a823b-kube-api-access-bbcx2\") pod \"redhat-marketplace-rqfsl\" (UID: \"b94c8b95-1f08-4f96-a9c0-47aef79a823b\") " pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.132101 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.177529 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b68x\" (UniqueName: \"kubernetes.io/projected/c55665b2-fe11-48ad-9699-5bf16993d344-kube-api-access-6b68x\") pod \"redhat-operators-lnpjg\" (UID: \"c55665b2-fe11-48ad-9699-5bf16993d344\") " pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.177704 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c55665b2-fe11-48ad-9699-5bf16993d344-utilities\") pod \"redhat-operators-lnpjg\" (UID: \"c55665b2-fe11-48ad-9699-5bf16993d344\") " pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.177857 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c55665b2-fe11-48ad-9699-5bf16993d344-catalog-content\") pod \"redhat-operators-lnpjg\" (UID: \"c55665b2-fe11-48ad-9699-5bf16993d344\") " pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.263578 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-qdw7c"] Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.280446 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c55665b2-fe11-48ad-9699-5bf16993d344-catalog-content\") pod \"redhat-operators-lnpjg\" (UID: \"c55665b2-fe11-48ad-9699-5bf16993d344\") " pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.280815 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b68x\" (UniqueName: \"kubernetes.io/projected/c55665b2-fe11-48ad-9699-5bf16993d344-kube-api-access-6b68x\") pod \"redhat-operators-lnpjg\" (UID: \"c55665b2-fe11-48ad-9699-5bf16993d344\") " pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.280881 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c55665b2-fe11-48ad-9699-5bf16993d344-utilities\") pod \"redhat-operators-lnpjg\" (UID: \"c55665b2-fe11-48ad-9699-5bf16993d344\") " pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.281125 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c55665b2-fe11-48ad-9699-5bf16993d344-catalog-content\") pod \"redhat-operators-lnpjg\" (UID: \"c55665b2-fe11-48ad-9699-5bf16993d344\") " pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.281347 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c55665b2-fe11-48ad-9699-5bf16993d344-utilities\") pod \"redhat-operators-lnpjg\" (UID: \"c55665b2-fe11-48ad-9699-5bf16993d344\") " pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.300381 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6b68x\" (UniqueName: \"kubernetes.io/projected/c55665b2-fe11-48ad-9699-5bf16993d344-kube-api-access-6b68x\") pod \"redhat-operators-lnpjg\" (UID: \"c55665b2-fe11-48ad-9699-5bf16993d344\") " pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.343825 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rqfsl"] Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.381041 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.588676 4806 generic.go:334] "Generic (PLEG): container finished" podID="557bfe89-c128-469e-8e26-f80ecb3a1cb1" containerID="ed8adc95bc6b105b0d89a3e53039627c1b2f83a2cf024f3d258bcc3a91768f25" exitCode=0 Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.588750 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfp4f" event={"ID":"557bfe89-c128-469e-8e26-f80ecb3a1cb1","Type":"ContainerDied","Data":"ed8adc95bc6b105b0d89a3e53039627c1b2f83a2cf024f3d258bcc3a91768f25"} Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.590681 4806 generic.go:334] "Generic (PLEG): container finished" podID="b94c8b95-1f08-4f96-a9c0-47aef79a823b" containerID="a6bcc4c46f56f905565e25796d84e2da8ea8df06b1cb029e0973e5fa299b84ae" exitCode=0 Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.590767 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqfsl" event={"ID":"b94c8b95-1f08-4f96-a9c0-47aef79a823b","Type":"ContainerDied","Data":"a6bcc4c46f56f905565e25796d84e2da8ea8df06b1cb029e0973e5fa299b84ae"} Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.590811 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqfsl" event={"ID":"b94c8b95-1f08-4f96-a9c0-47aef79a823b","Type":"ContainerStarted","Data":"54187b48a1221e549387847159ec92a88a4b5c298f5d6158dd539efe1a9006e8"} Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.593021 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" event={"ID":"91810d32-1f8f-416c-890c-2c9c01199476","Type":"ContainerStarted","Data":"369bbe0997e4e347cdc642587f5e9d81e4dee5fc7e0648917529966769df43f0"} Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.593059 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" event={"ID":"91810d32-1f8f-416c-890c-2c9c01199476","Type":"ContainerStarted","Data":"f3aa1e7ad759f95f1a4a7a2484703bff602f8749f104fd0390a388ddbc5b7225"} Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.593306 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.607980 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f427x" event={"ID":"0c92773d-ebe0-4739-9668-f826721f9a36","Type":"ContainerStarted","Data":"17bc6dc5e622fa89ec1643b3654ef15fe2a7dff16c2f81de730073fe3391aef6"} Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.608157 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lnpjg"] Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.643378 4806 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f427x" podStartSLOduration=2.238251921 podStartE2EDuration="3.643360646s" podCreationTimestamp="2026-01-26 07:59:36 +0000 UTC" firstStartedPulling="2026-01-26 07:59:37.565984251 +0000 UTC m=+356.830392307" lastFinishedPulling="2026-01-26 07:59:38.971092976 +0000 UTC m=+358.235501032" observedRunningTime="2026-01-26 07:59:39.641955704 +0000 UTC m=+358.906363780" watchObservedRunningTime="2026-01-26 07:59:39.643360646 +0000 UTC m=+358.907768702" Jan 26 07:59:39 crc kubenswrapper[4806]: I0126 07:59:39.667987 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" podStartSLOduration=1.667816745 podStartE2EDuration="1.667816745s" podCreationTimestamp="2026-01-26 07:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 07:59:39.657990122 +0000 UTC m=+358.922398178" watchObservedRunningTime="2026-01-26 07:59:39.667816745 +0000 UTC m=+358.932224811" Jan 26 07:59:40 crc kubenswrapper[4806]: I0126 07:59:40.614823 4806 generic.go:334] "Generic (PLEG): container finished" podID="c55665b2-fe11-48ad-9699-5bf16993d344" containerID="351c0ed7a7edababf2328184aa3561d0328ae6509120b1be9923bc8266b8777a" exitCode=0 Jan 26 07:59:40 crc kubenswrapper[4806]: I0126 07:59:40.615798 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnpjg" event={"ID":"c55665b2-fe11-48ad-9699-5bf16993d344","Type":"ContainerDied","Data":"351c0ed7a7edababf2328184aa3561d0328ae6509120b1be9923bc8266b8777a"} Jan 26 07:59:40 crc kubenswrapper[4806]: I0126 07:59:40.615851 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnpjg" event={"ID":"c55665b2-fe11-48ad-9699-5bf16993d344","Type":"ContainerStarted","Data":"647e7c77bc304c111981ba967ee47a3c10a9ec01c0f45917a9120c0b7a73047e"} Jan 26 07:59:42 crc kubenswrapper[4806]: I0126 07:59:42.631504 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnpjg" event={"ID":"c55665b2-fe11-48ad-9699-5bf16993d344","Type":"ContainerStarted","Data":"98baf79153f59dec8d139019b444c83102506223d3f10884602d7516cb1a065a"} Jan 26 07:59:42 crc kubenswrapper[4806]: I0126 07:59:42.634305 4806 generic.go:334] "Generic (PLEG): container finished" podID="b94c8b95-1f08-4f96-a9c0-47aef79a823b" containerID="125b3869567d22755304e61f878e38db4252cf8645a99eb21c5b5d9a1dd625ce" exitCode=0 Jan 26 07:59:42 crc kubenswrapper[4806]: I0126 07:59:42.634359 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqfsl" event={"ID":"b94c8b95-1f08-4f96-a9c0-47aef79a823b","Type":"ContainerDied","Data":"125b3869567d22755304e61f878e38db4252cf8645a99eb21c5b5d9a1dd625ce"} Jan 26 07:59:42 crc kubenswrapper[4806]: I0126 07:59:42.637186 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xfp4f" event={"ID":"557bfe89-c128-469e-8e26-f80ecb3a1cb1","Type":"ContainerStarted","Data":"b3b2e3d02b94f5962c056e6798555c17441362651d1679903d6e355b9992ac3d"} Jan 26 07:59:42 crc kubenswrapper[4806]: I0126 07:59:42.703839 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xfp4f" podStartSLOduration=1.8624288679999998 podStartE2EDuration="6.703820478s" 
podCreationTimestamp="2026-01-26 07:59:36 +0000 UTC" firstStartedPulling="2026-01-26 07:59:37.566006951 +0000 UTC m=+356.830415047" lastFinishedPulling="2026-01-26 07:59:42.407398601 +0000 UTC m=+361.671806657" observedRunningTime="2026-01-26 07:59:42.700302203 +0000 UTC m=+361.964710259" watchObservedRunningTime="2026-01-26 07:59:42.703820478 +0000 UTC m=+361.968228534" Jan 26 07:59:43 crc kubenswrapper[4806]: I0126 07:59:43.644807 4806 generic.go:334] "Generic (PLEG): container finished" podID="c55665b2-fe11-48ad-9699-5bf16993d344" containerID="98baf79153f59dec8d139019b444c83102506223d3f10884602d7516cb1a065a" exitCode=0 Jan 26 07:59:43 crc kubenswrapper[4806]: I0126 07:59:43.644872 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnpjg" event={"ID":"c55665b2-fe11-48ad-9699-5bf16993d344","Type":"ContainerDied","Data":"98baf79153f59dec8d139019b444c83102506223d3f10884602d7516cb1a065a"} Jan 26 07:59:43 crc kubenswrapper[4806]: I0126 07:59:43.648214 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rqfsl" event={"ID":"b94c8b95-1f08-4f96-a9c0-47aef79a823b","Type":"ContainerStarted","Data":"88defa89218697f3c61fc7c6a6788b08a57f5d0a69dc17f864769d3a1e9ac058"} Jan 26 07:59:43 crc kubenswrapper[4806]: I0126 07:59:43.690278 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rqfsl" podStartSLOduration=2.196916237 podStartE2EDuration="5.690251483s" podCreationTimestamp="2026-01-26 07:59:38 +0000 UTC" firstStartedPulling="2026-01-26 07:59:39.591805939 +0000 UTC m=+358.856213995" lastFinishedPulling="2026-01-26 07:59:43.085141185 +0000 UTC m=+362.349549241" observedRunningTime="2026-01-26 07:59:43.687606714 +0000 UTC m=+362.952014770" watchObservedRunningTime="2026-01-26 07:59:43.690251483 +0000 UTC m=+362.954659539" Jan 26 07:59:44 crc kubenswrapper[4806]: I0126 07:59:44.656504 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnpjg" event={"ID":"c55665b2-fe11-48ad-9699-5bf16993d344","Type":"ContainerStarted","Data":"24a45d2fc7aa98c803e8a1330fc6e3bb3920daa449143282d134c7f008bbe212"} Jan 26 07:59:45 crc kubenswrapper[4806]: I0126 07:59:45.807133 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 07:59:45 crc kubenswrapper[4806]: I0126 07:59:45.807651 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 07:59:46 crc kubenswrapper[4806]: I0126 07:59:46.728864 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:46 crc kubenswrapper[4806]: I0126 07:59:46.729428 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:46 crc kubenswrapper[4806]: I0126 07:59:46.788312 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:46 
crc kubenswrapper[4806]: I0126 07:59:46.819454 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lnpjg" podStartSLOduration=5.269023086 podStartE2EDuration="8.819429003s" podCreationTimestamp="2026-01-26 07:59:38 +0000 UTC" firstStartedPulling="2026-01-26 07:59:40.655379524 +0000 UTC m=+359.919787580" lastFinishedPulling="2026-01-26 07:59:44.205785441 +0000 UTC m=+363.470193497" observedRunningTime="2026-01-26 07:59:45.687237432 +0000 UTC m=+364.951645488" watchObservedRunningTime="2026-01-26 07:59:46.819429003 +0000 UTC m=+366.083837059" Jan 26 07:59:46 crc kubenswrapper[4806]: I0126 07:59:46.922384 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:46 crc kubenswrapper[4806]: I0126 07:59:46.922492 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:46 crc kubenswrapper[4806]: I0126 07:59:46.978973 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:47 crc kubenswrapper[4806]: I0126 07:59:47.727545 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f427x" Jan 26 07:59:47 crc kubenswrapper[4806]: I0126 07:59:47.732664 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xfp4f" Jan 26 07:59:49 crc kubenswrapper[4806]: I0126 07:59:49.133203 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:49 crc kubenswrapper[4806]: I0126 07:59:49.133278 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:49 crc kubenswrapper[4806]: I0126 07:59:49.197144 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:49 crc kubenswrapper[4806]: I0126 07:59:49.381612 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:49 crc kubenswrapper[4806]: I0126 07:59:49.382058 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:49 crc kubenswrapper[4806]: I0126 07:59:49.738432 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rqfsl" Jan 26 07:59:50 crc kubenswrapper[4806]: I0126 07:59:50.431765 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lnpjg" podUID="c55665b2-fe11-48ad-9699-5bf16993d344" containerName="registry-server" probeResult="failure" output=< Jan 26 07:59:50 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 07:59:50 crc kubenswrapper[4806]: > Jan 26 07:59:58 crc kubenswrapper[4806]: I0126 07:59:58.996620 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-qdw7c" Jan 26 07:59:59 crc kubenswrapper[4806]: I0126 07:59:59.059994 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tncnb"] Jan 26 07:59:59 crc kubenswrapper[4806]: I0126 07:59:59.436448 
4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 07:59:59 crc kubenswrapper[4806]: I0126 07:59:59.481993 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lnpjg" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.209300 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8"] Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.210422 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.212909 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.213968 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.227947 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8"] Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.353620 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8e8ed9e-0309-4618-8376-ab447ae9bb09-secret-volume\") pod \"collect-profiles-29490240-ttrd8\" (UID: \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.353708 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8tcp\" (UniqueName: \"kubernetes.io/projected/b8e8ed9e-0309-4618-8376-ab447ae9bb09-kube-api-access-k8tcp\") pod \"collect-profiles-29490240-ttrd8\" (UID: \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.353739 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8e8ed9e-0309-4618-8376-ab447ae9bb09-config-volume\") pod \"collect-profiles-29490240-ttrd8\" (UID: \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.454714 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8tcp\" (UniqueName: \"kubernetes.io/projected/b8e8ed9e-0309-4618-8376-ab447ae9bb09-kube-api-access-k8tcp\") pod \"collect-profiles-29490240-ttrd8\" (UID: \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.455075 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8e8ed9e-0309-4618-8376-ab447ae9bb09-config-volume\") pod \"collect-profiles-29490240-ttrd8\" (UID: \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 
08:00:00.455125 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8e8ed9e-0309-4618-8376-ab447ae9bb09-secret-volume\") pod \"collect-profiles-29490240-ttrd8\" (UID: \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.456209 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8e8ed9e-0309-4618-8376-ab447ae9bb09-config-volume\") pod \"collect-profiles-29490240-ttrd8\" (UID: \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.462964 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8e8ed9e-0309-4618-8376-ab447ae9bb09-secret-volume\") pod \"collect-profiles-29490240-ttrd8\" (UID: \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.477545 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8tcp\" (UniqueName: \"kubernetes.io/projected/b8e8ed9e-0309-4618-8376-ab447ae9bb09-kube-api-access-k8tcp\") pod \"collect-profiles-29490240-ttrd8\" (UID: \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.529563 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:00 crc kubenswrapper[4806]: I0126 08:00:00.937017 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8"] Jan 26 08:00:01 crc kubenswrapper[4806]: I0126 08:00:01.761479 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" event={"ID":"b8e8ed9e-0309-4618-8376-ab447ae9bb09","Type":"ContainerStarted","Data":"e8682235b9845e8c365a166b9f226190cac1aaca261a7be8d42a81b7ee42fd8a"} Jan 26 08:00:03 crc kubenswrapper[4806]: I0126 08:00:03.780684 4806 generic.go:334] "Generic (PLEG): container finished" podID="b8e8ed9e-0309-4618-8376-ab447ae9bb09" containerID="7225795f0dc41b00698fbefc84298c1387c254dd54fa9aabb3738812d9426911" exitCode=0 Jan 26 08:00:03 crc kubenswrapper[4806]: I0126 08:00:03.781200 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" event={"ID":"b8e8ed9e-0309-4618-8376-ab447ae9bb09","Type":"ContainerDied","Data":"7225795f0dc41b00698fbefc84298c1387c254dd54fa9aabb3738812d9426911"} Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.051694 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.224743 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8e8ed9e-0309-4618-8376-ab447ae9bb09-secret-volume\") pod \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\" (UID: \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\") " Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.224908 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8e8ed9e-0309-4618-8376-ab447ae9bb09-config-volume\") pod \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\" (UID: \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\") " Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.225010 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8tcp\" (UniqueName: \"kubernetes.io/projected/b8e8ed9e-0309-4618-8376-ab447ae9bb09-kube-api-access-k8tcp\") pod \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\" (UID: \"b8e8ed9e-0309-4618-8376-ab447ae9bb09\") " Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.226331 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8e8ed9e-0309-4618-8376-ab447ae9bb09-config-volume" (OuterVolumeSpecName: "config-volume") pod "b8e8ed9e-0309-4618-8376-ab447ae9bb09" (UID: "b8e8ed9e-0309-4618-8376-ab447ae9bb09"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.229704 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8e8ed9e-0309-4618-8376-ab447ae9bb09-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.239918 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8e8ed9e-0309-4618-8376-ab447ae9bb09-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b8e8ed9e-0309-4618-8376-ab447ae9bb09" (UID: "b8e8ed9e-0309-4618-8376-ab447ae9bb09"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.239948 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8e8ed9e-0309-4618-8376-ab447ae9bb09-kube-api-access-k8tcp" (OuterVolumeSpecName: "kube-api-access-k8tcp") pod "b8e8ed9e-0309-4618-8376-ab447ae9bb09" (UID: "b8e8ed9e-0309-4618-8376-ab447ae9bb09"). InnerVolumeSpecName "kube-api-access-k8tcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.331280 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8e8ed9e-0309-4618-8376-ab447ae9bb09-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.331566 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8tcp\" (UniqueName: \"kubernetes.io/projected/b8e8ed9e-0309-4618-8376-ab447ae9bb09-kube-api-access-k8tcp\") on node \"crc\" DevicePath \"\"" Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.800198 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" event={"ID":"b8e8ed9e-0309-4618-8376-ab447ae9bb09","Type":"ContainerDied","Data":"e8682235b9845e8c365a166b9f226190cac1aaca261a7be8d42a81b7ee42fd8a"} Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.800289 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8682235b9845e8c365a166b9f226190cac1aaca261a7be8d42a81b7ee42fd8a" Jan 26 08:00:05 crc kubenswrapper[4806]: I0126 08:00:05.800396 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8" Jan 26 08:00:15 crc kubenswrapper[4806]: I0126 08:00:15.806488 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:00:15 crc kubenswrapper[4806]: I0126 08:00:15.806991 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.108546 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" podUID="3bcae027-4e25-4c41-bbc9-639927f58691" containerName="registry" containerID="cri-o://29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75" gracePeriod=30 Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.498826 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.542943 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bcae027-4e25-4c41-bbc9-639927f58691-installation-pull-secrets\") pod \"3bcae027-4e25-4c41-bbc9-639927f58691\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.543353 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-459vt\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-kube-api-access-459vt\") pod \"3bcae027-4e25-4c41-bbc9-639927f58691\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.543509 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"3bcae027-4e25-4c41-bbc9-639927f58691\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.543561 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bcae027-4e25-4c41-bbc9-639927f58691-registry-certificates\") pod \"3bcae027-4e25-4c41-bbc9-639927f58691\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.543620 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-registry-tls\") pod \"3bcae027-4e25-4c41-bbc9-639927f58691\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.543641 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bcae027-4e25-4c41-bbc9-639927f58691-trusted-ca\") pod \"3bcae027-4e25-4c41-bbc9-639927f58691\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.543701 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bcae027-4e25-4c41-bbc9-639927f58691-ca-trust-extracted\") pod \"3bcae027-4e25-4c41-bbc9-639927f58691\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.543734 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-bound-sa-token\") pod \"3bcae027-4e25-4c41-bbc9-639927f58691\" (UID: \"3bcae027-4e25-4c41-bbc9-639927f58691\") " Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.544916 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bcae027-4e25-4c41-bbc9-639927f58691-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3bcae027-4e25-4c41-bbc9-639927f58691" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.544933 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bcae027-4e25-4c41-bbc9-639927f58691-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "3bcae027-4e25-4c41-bbc9-639927f58691" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.549384 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "3bcae027-4e25-4c41-bbc9-639927f58691" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.549640 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "3bcae027-4e25-4c41-bbc9-639927f58691" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.553732 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bcae027-4e25-4c41-bbc9-639927f58691-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "3bcae027-4e25-4c41-bbc9-639927f58691" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.554682 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-kube-api-access-459vt" (OuterVolumeSpecName: "kube-api-access-459vt") pod "3bcae027-4e25-4c41-bbc9-639927f58691" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691"). InnerVolumeSpecName "kube-api-access-459vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.561481 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bcae027-4e25-4c41-bbc9-639927f58691-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "3bcae027-4e25-4c41-bbc9-639927f58691" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.575677 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "3bcae027-4e25-4c41-bbc9-639927f58691" (UID: "3bcae027-4e25-4c41-bbc9-639927f58691"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.645060 4806 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.645311 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3bcae027-4e25-4c41-bbc9-639927f58691-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.645371 4806 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3bcae027-4e25-4c41-bbc9-639927f58691-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.645420 4806 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.645496 4806 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3bcae027-4e25-4c41-bbc9-639927f58691-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.645583 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-459vt\" (UniqueName: \"kubernetes.io/projected/3bcae027-4e25-4c41-bbc9-639927f58691-kube-api-access-459vt\") on node \"crc\" DevicePath \"\"" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.645637 4806 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3bcae027-4e25-4c41-bbc9-639927f58691-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.907937 4806 generic.go:334] "Generic (PLEG): container finished" podID="3bcae027-4e25-4c41-bbc9-639927f58691" containerID="29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75" exitCode=0 Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.907995 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" event={"ID":"3bcae027-4e25-4c41-bbc9-639927f58691","Type":"ContainerDied","Data":"29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75"} Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.908040 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" event={"ID":"3bcae027-4e25-4c41-bbc9-639927f58691","Type":"ContainerDied","Data":"e462c7aa5efd405dbdbea2c0c4ed6ec5e86b59fde72ec70de998ba0788a45ab0"} Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.908047 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-tncnb" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.908069 4806 scope.go:117] "RemoveContainer" containerID="29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.929078 4806 scope.go:117] "RemoveContainer" containerID="29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75" Jan 26 08:00:24 crc kubenswrapper[4806]: E0126 08:00:24.930501 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75\": container with ID starting with 29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75 not found: ID does not exist" containerID="29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.930556 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75"} err="failed to get container status \"29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75\": rpc error: code = NotFound desc = could not find container \"29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75\": container with ID starting with 29ef45d46170a370c06a09ebee24da06d861a02478eb5c93239234f9362fee75 not found: ID does not exist" Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.942953 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tncnb"] Jan 26 08:00:24 crc kubenswrapper[4806]: I0126 08:00:24.950953 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-tncnb"] Jan 26 08:00:25 crc kubenswrapper[4806]: I0126 08:00:25.049154 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bcae027-4e25-4c41-bbc9-639927f58691" path="/var/lib/kubelet/pods/3bcae027-4e25-4c41-bbc9-639927f58691/volumes" Jan 26 08:00:45 crc kubenswrapper[4806]: I0126 08:00:45.806077 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:00:45 crc kubenswrapper[4806]: I0126 08:00:45.806721 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:00:45 crc kubenswrapper[4806]: I0126 08:00:45.806785 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:00:45 crc kubenswrapper[4806]: I0126 08:00:45.807603 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61cccd600d491aa95cc07ae3edd2fe4d985307d841d68d06d1cce694939e53c9"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:00:45 crc kubenswrapper[4806]: I0126 
08:00:45.807697 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://61cccd600d491aa95cc07ae3edd2fe4d985307d841d68d06d1cce694939e53c9" gracePeriod=600 Jan 26 08:00:45 crc kubenswrapper[4806]: E0126 08:00:45.916765 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd07502a2_50b0_4012_b335_340a1c694c50.slice/crio-61cccd600d491aa95cc07ae3edd2fe4d985307d841d68d06d1cce694939e53c9.scope\": RecentStats: unable to find data in memory cache]" Jan 26 08:00:46 crc kubenswrapper[4806]: I0126 08:00:46.068324 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="61cccd600d491aa95cc07ae3edd2fe4d985307d841d68d06d1cce694939e53c9" exitCode=0 Jan 26 08:00:46 crc kubenswrapper[4806]: I0126 08:00:46.068367 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"61cccd600d491aa95cc07ae3edd2fe4d985307d841d68d06d1cce694939e53c9"} Jan 26 08:00:46 crc kubenswrapper[4806]: I0126 08:00:46.068406 4806 scope.go:117] "RemoveContainer" containerID="824bb40c04b0ea474c45020c9c84ce7b7ee18c52beef9fee9618ba5a6e59be04" Jan 26 08:00:47 crc kubenswrapper[4806]: I0126 08:00:47.076879 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"25fe21fbdefc972bf60875548f11358df4e04c7bb242af40b8201587c399a5cc"} Jan 26 08:03:15 crc kubenswrapper[4806]: I0126 08:03:15.805938 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:03:15 crc kubenswrapper[4806]: I0126 08:03:15.806544 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.969806 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-shxsb"] Jan 26 08:03:22 crc kubenswrapper[4806]: E0126 08:03:22.970836 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bcae027-4e25-4c41-bbc9-639927f58691" containerName="registry" Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.970856 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bcae027-4e25-4c41-bbc9-639927f58691" containerName="registry" Jan 26 08:03:22 crc kubenswrapper[4806]: E0126 08:03:22.970879 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8e8ed9e-0309-4618-8376-ab447ae9bb09" containerName="collect-profiles" Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.970891 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8e8ed9e-0309-4618-8376-ab447ae9bb09" 
containerName="collect-profiles" Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.971088 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8e8ed9e-0309-4618-8376-ab447ae9bb09" containerName="collect-profiles" Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.971112 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bcae027-4e25-4c41-bbc9-639927f58691" containerName="registry" Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.971710 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-shxsb" Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.973183 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-x8z22"] Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.973955 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x8z22" Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.978718 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.978752 4806 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6g224" Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.979036 4806 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-s9l87" Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.980412 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 08:03:22 crc kubenswrapper[4806]: I0126 08:03:22.996341 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-shxsb"] Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.000354 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-x8z22"] Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.009359 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-j6gd8"] Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.010022 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-j6gd8" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.011623 4806 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-rgtcr" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.037595 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-j6gd8"] Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.095409 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxlfn\" (UniqueName: \"kubernetes.io/projected/dcc781ef-dcbe-4eb5-9291-3486d5ef0d00-kube-api-access-rxlfn\") pod \"cert-manager-cainjector-cf98fcc89-x8z22\" (UID: \"dcc781ef-dcbe-4eb5-9291-3486d5ef0d00\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-x8z22" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.095470 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5ss\" (UniqueName: \"kubernetes.io/projected/d8ad5ee7-8dde-482e-9c75-0114fb096dfb-kube-api-access-rs5ss\") pod \"cert-manager-858654f9db-shxsb\" (UID: \"d8ad5ee7-8dde-482e-9c75-0114fb096dfb\") " pod="cert-manager/cert-manager-858654f9db-shxsb" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.196969 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czmln\" (UniqueName: \"kubernetes.io/projected/bd1576cf-642f-4fc6-86fb-2d144fbd299c-kube-api-access-czmln\") pod \"cert-manager-webhook-687f57d79b-j6gd8\" (UID: \"bd1576cf-642f-4fc6-86fb-2d144fbd299c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-j6gd8" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.197439 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxlfn\" (UniqueName: \"kubernetes.io/projected/dcc781ef-dcbe-4eb5-9291-3486d5ef0d00-kube-api-access-rxlfn\") pod \"cert-manager-cainjector-cf98fcc89-x8z22\" (UID: \"dcc781ef-dcbe-4eb5-9291-3486d5ef0d00\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-x8z22" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.197502 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs5ss\" (UniqueName: \"kubernetes.io/projected/d8ad5ee7-8dde-482e-9c75-0114fb096dfb-kube-api-access-rs5ss\") pod \"cert-manager-858654f9db-shxsb\" (UID: \"d8ad5ee7-8dde-482e-9c75-0114fb096dfb\") " pod="cert-manager/cert-manager-858654f9db-shxsb" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.219739 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs5ss\" (UniqueName: \"kubernetes.io/projected/d8ad5ee7-8dde-482e-9c75-0114fb096dfb-kube-api-access-rs5ss\") pod \"cert-manager-858654f9db-shxsb\" (UID: \"d8ad5ee7-8dde-482e-9c75-0114fb096dfb\") " pod="cert-manager/cert-manager-858654f9db-shxsb" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.225345 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxlfn\" (UniqueName: \"kubernetes.io/projected/dcc781ef-dcbe-4eb5-9291-3486d5ef0d00-kube-api-access-rxlfn\") pod \"cert-manager-cainjector-cf98fcc89-x8z22\" (UID: \"dcc781ef-dcbe-4eb5-9291-3486d5ef0d00\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-x8z22" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.289335 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-shxsb" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.295742 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x8z22" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.299201 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czmln\" (UniqueName: \"kubernetes.io/projected/bd1576cf-642f-4fc6-86fb-2d144fbd299c-kube-api-access-czmln\") pod \"cert-manager-webhook-687f57d79b-j6gd8\" (UID: \"bd1576cf-642f-4fc6-86fb-2d144fbd299c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-j6gd8" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.323813 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czmln\" (UniqueName: \"kubernetes.io/projected/bd1576cf-642f-4fc6-86fb-2d144fbd299c-kube-api-access-czmln\") pod \"cert-manager-webhook-687f57d79b-j6gd8\" (UID: \"bd1576cf-642f-4fc6-86fb-2d144fbd299c\") " pod="cert-manager/cert-manager-webhook-687f57d79b-j6gd8" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.324180 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-j6gd8" Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.594126 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-x8z22"] Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.604273 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.847998 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-shxsb"] Jan 26 08:03:23 crc kubenswrapper[4806]: I0126 08:03:23.859794 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-j6gd8"] Jan 26 08:03:23 crc kubenswrapper[4806]: W0126 08:03:23.876561 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd1576cf_642f_4fc6_86fb_2d144fbd299c.slice/crio-9620a60552599266a78bb5fcaad052b124608529f0a5ad188a2c4706ab1ba936 WatchSource:0}: Error finding container 9620a60552599266a78bb5fcaad052b124608529f0a5ad188a2c4706ab1ba936: Status 404 returned error can't find the container with id 9620a60552599266a78bb5fcaad052b124608529f0a5ad188a2c4706ab1ba936 Jan 26 08:03:24 crc kubenswrapper[4806]: I0126 08:03:24.044943 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-shxsb" event={"ID":"d8ad5ee7-8dde-482e-9c75-0114fb096dfb","Type":"ContainerStarted","Data":"18335acce28a2ef8a874bc9f1b087f58e1a6e441d6084dd819fd10e8952c3538"} Jan 26 08:03:24 crc kubenswrapper[4806]: I0126 08:03:24.045993 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-j6gd8" event={"ID":"bd1576cf-642f-4fc6-86fb-2d144fbd299c","Type":"ContainerStarted","Data":"9620a60552599266a78bb5fcaad052b124608529f0a5ad188a2c4706ab1ba936"} Jan 26 08:03:24 crc kubenswrapper[4806]: I0126 08:03:24.047096 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x8z22" event={"ID":"dcc781ef-dcbe-4eb5-9291-3486d5ef0d00","Type":"ContainerStarted","Data":"033f3ee2456ea8be944a253b8fcb3961ba0e8bcae456df9173e70e0dc63c1927"} Jan 26 08:03:27 crc kubenswrapper[4806]: 
I0126 08:03:27.063115 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x8z22" event={"ID":"dcc781ef-dcbe-4eb5-9291-3486d5ef0d00","Type":"ContainerStarted","Data":"9a8828d984ec07bf5581980bdb10ceaa57bfb49063ba5d9f058a89962a117e74"} Jan 26 08:03:28 crc kubenswrapper[4806]: I0126 08:03:28.070128 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-shxsb" event={"ID":"d8ad5ee7-8dde-482e-9c75-0114fb096dfb","Type":"ContainerStarted","Data":"79f00be20fbc6b888007c442baf5a0a9d5fe204135ac0f37d10a25c63af3c13a"} Jan 26 08:03:28 crc kubenswrapper[4806]: I0126 08:03:28.071598 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-j6gd8" event={"ID":"bd1576cf-642f-4fc6-86fb-2d144fbd299c","Type":"ContainerStarted","Data":"1ea05a4164fa0b017fdff54c55ab6cac2f6301d723d09f8c218c41865090b273"} Jan 26 08:03:28 crc kubenswrapper[4806]: I0126 08:03:28.085332 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-x8z22" podStartSLOduration=3.906870691 podStartE2EDuration="6.085313075s" podCreationTimestamp="2026-01-26 08:03:22 +0000 UTC" firstStartedPulling="2026-01-26 08:03:23.604055989 +0000 UTC m=+582.868464045" lastFinishedPulling="2026-01-26 08:03:25.782498373 +0000 UTC m=+585.046906429" observedRunningTime="2026-01-26 08:03:27.084947163 +0000 UTC m=+586.349355229" watchObservedRunningTime="2026-01-26 08:03:28.085313075 +0000 UTC m=+587.349721131" Jan 26 08:03:28 crc kubenswrapper[4806]: I0126 08:03:28.089256 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-shxsb" podStartSLOduration=2.885528796 podStartE2EDuration="6.089219883s" podCreationTimestamp="2026-01-26 08:03:22 +0000 UTC" firstStartedPulling="2026-01-26 08:03:23.865669715 +0000 UTC m=+583.130077771" lastFinishedPulling="2026-01-26 08:03:27.069360802 +0000 UTC m=+586.333768858" observedRunningTime="2026-01-26 08:03:28.084403701 +0000 UTC m=+587.348811757" watchObservedRunningTime="2026-01-26 08:03:28.089219883 +0000 UTC m=+587.353627949" Jan 26 08:03:28 crc kubenswrapper[4806]: I0126 08:03:28.109191 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-j6gd8" podStartSLOduration=2.862418971 podStartE2EDuration="6.109168354s" podCreationTimestamp="2026-01-26 08:03:22 +0000 UTC" firstStartedPulling="2026-01-26 08:03:23.878554404 +0000 UTC m=+583.142962460" lastFinishedPulling="2026-01-26 08:03:27.125303787 +0000 UTC m=+586.389711843" observedRunningTime="2026-01-26 08:03:28.10403509 +0000 UTC m=+587.368443146" watchObservedRunningTime="2026-01-26 08:03:28.109168354 +0000 UTC m=+587.373576420" Jan 26 08:03:28 crc kubenswrapper[4806]: I0126 08:03:28.324652 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-j6gd8" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.469691 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8mw7z"] Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.470593 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovn-controller" containerID="cri-o://634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df" gracePeriod=30 
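
The pod_startup_latency_tracker entries above each report three related figures. The logged numbers are consistent with podStartE2EDuration running from podCreationTimestamp to watchObservedRunningTime, and with podStartSLOduration being that same span minus the image-pull window (firstStartedPulling to lastFinishedPulling). The short Go sketch below re-derives the cert-manager-cainjector-cf98fcc89-x8z22 values from the timestamps in its entry; it is an illustration inferred from how the logged numbers relate, not kubelet source code.

    package main

    import (
        "fmt"
        "time"
    )

    // Minimal sketch (not kubelet code): re-derive the durations logged by
    // pod_startup_latency_tracker.go for cert-manager-cainjector-cf98fcc89-x8z22,
    // assuming (consistently with the logged numbers) that the E2E duration spans
    // podCreationTimestamp -> watchObservedRunningTime and that the SLO duration
    // additionally excludes the image-pull window.
    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST" // timestamp format used in the log entries

        mustParse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        created := mustParse("2026-01-26 08:03:22 +0000 UTC")
        firstStartedPulling := mustParse("2026-01-26 08:03:23.604055989 +0000 UTC")
        lastFinishedPulling := mustParse("2026-01-26 08:03:25.782498373 +0000 UTC")
        watchObservedRunning := mustParse("2026-01-26 08:03:28.085313075 +0000 UTC")

        pull := lastFinishedPulling.Sub(firstStartedPulling) // time spent pulling images
        e2e := watchObservedRunning.Sub(created)             // logged as podStartE2EDuration=6.085313075s
        slo := e2e - pull                                    // logged as podStartSLOduration=3.906870691

        fmt.Println("image pull:", pull)
        fmt.Println("podStartE2EDuration:", e2e)
        fmt.Println("podStartSLOduration:", slo)
    }

Running it prints image pull: 2.178442384s, podStartE2EDuration: 6.085313075s, and podStartSLOduration: 3.906870691s, matching the values in the entry above; the same relationship holds for the certified-operators, community-operators, redhat-marketplace, and redhat-operators entries earlier in the log.
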
Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.471058 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="sbdb" containerID="cri-o://6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047" gracePeriod=30 Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.471122 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="nbdb" containerID="cri-o://9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d" gracePeriod=30 Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.471173 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="northd" containerID="cri-o://c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c" gracePeriod=30 Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.471238 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789" gracePeriod=30 Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.471280 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="kube-rbac-proxy-node" containerID="cri-o://21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103" gracePeriod=30 Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.471324 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovn-acl-logging" containerID="cri-o://e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da" gracePeriod=30 Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.504973 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" containerID="cri-o://7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773" gracePeriod=30 Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.825816 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/3.log" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.828131 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovn-acl-logging/0.log" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.828679 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovn-controller/0.log" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.829114 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.876808 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4lfsr"] Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877058 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovn-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877073 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovn-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877089 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="kubecfg-setup" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877094 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="kubecfg-setup" Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877104 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="sbdb" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877111 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="sbdb" Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877119 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877126 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877133 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877139 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877146 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877151 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877159 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877165 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877172 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="northd" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877179 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="northd" Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877186 4806 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovn-acl-logging" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877191 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovn-acl-logging" Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877204 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="kube-rbac-proxy-node" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877210 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="kube-rbac-proxy-node" Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877220 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="nbdb" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877227 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="nbdb" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877364 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877376 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovn-acl-logging" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877384 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="sbdb" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877392 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877399 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877406 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="kube-rbac-proxy-node" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877413 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="northd" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877420 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877430 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="nbdb" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877439 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877447 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovn-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877608 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877617 
4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: E0126 08:03:32.877625 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877631 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.877714 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerName="ovnkube-controller" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.879301 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884156 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-slash\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884206 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-kubelet\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884244 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-var-lib-cni-networks-ovn-kubernetes\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884280 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-run-ovn-kubernetes\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884302 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-slash" (OuterVolumeSpecName: "host-slash") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884322 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-systemd\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884396 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-ovn\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884453 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-systemd-units\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884491 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-openvswitch\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884552 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-log-socket\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884579 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-node-log\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884613 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-etc-openvswitch\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884641 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovnkube-script-lib\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884683 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-run-netns\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884709 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovn-node-metrics-cert\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: 
\"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884743 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovnkube-config\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884777 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh82q\" (UniqueName: \"kubernetes.io/projected/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-kube-api-access-bh82q\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884809 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-env-overrides\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884843 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-cni-netd\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884873 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-cni-bin\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.884901 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-var-lib-openvswitch\") pod \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\" (UID: \"1f8b8acb-f4cf-41db-82f8-94ffd21c1594\") " Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.885314 4806 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.885360 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.885388 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.885414 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.885437 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.885464 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-log-socket" (OuterVolumeSpecName: "log-socket") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.885487 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-node-log" (OuterVolumeSpecName: "node-log") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.885509 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.885652 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.885715 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.885739 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.886040 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.886335 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.886373 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.886399 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.887830 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.888118 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.890422 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). 
InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.890444 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-kube-api-access-bh82q" (OuterVolumeSpecName: "kube-api-access-bh82q") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "kube-api-access-bh82q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.900078 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "1f8b8acb-f4cf-41db-82f8-94ffd21c1594" (UID: "1f8b8acb-f4cf-41db-82f8-94ffd21c1594"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986601 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-systemd-units\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986643 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89f46edf-56e3-4041-badf-e70f8c4bddb7-env-overrides\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986666 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-node-log\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986690 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-run-systemd\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986716 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89f46edf-56e3-4041-badf-e70f8c4bddb7-ovn-node-metrics-cert\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986739 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-cni-netd\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986761 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89f46edf-56e3-4041-badf-e70f8c4bddb7-ovnkube-config\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986784 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-run-netns\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986809 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-run-openvswitch\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986832 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-cni-bin\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986879 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-run-ovn\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986911 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-log-socket\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986933 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtlxr\" (UniqueName: \"kubernetes.io/projected/89f46edf-56e3-4041-badf-e70f8c4bddb7-kube-api-access-jtlxr\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986953 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-kubelet\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986976 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-var-lib-openvswitch\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.986997 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-slash\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987026 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-etc-openvswitch\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987049 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/89f46edf-56e3-4041-badf-e70f8c4bddb7-ovnkube-script-lib\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987101 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-run-ovn-kubernetes\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987128 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987178 4806 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987194 4806 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987206 4806 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987215 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh82q\" (UniqueName: \"kubernetes.io/projected/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-kube-api-access-bh82q\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987223 4806 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987232 4806 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987241 4806 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987250 4806 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987258 4806 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987268 4806 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987276 4806 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987286 4806 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987294 4806 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987301 4806 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987309 4806 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987318 4806 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987326 4806 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987335 4806 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:32 crc kubenswrapper[4806]: I0126 08:03:32.987343 4806 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/1f8b8acb-f4cf-41db-82f8-94ffd21c1594-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088389 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-cni-bin\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088449 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-cni-bin\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088458 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-run-ovn\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088498 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-run-ovn\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088570 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-log-socket\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088612 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtlxr\" (UniqueName: \"kubernetes.io/projected/89f46edf-56e3-4041-badf-e70f8c4bddb7-kube-api-access-jtlxr\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088639 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-kubelet\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088663 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-var-lib-openvswitch\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088699 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/89f46edf-56e3-4041-badf-e70f8c4bddb7-ovnkube-script-lib\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 
08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088719 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-slash\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088740 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-etc-openvswitch\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088654 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-log-socket\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088767 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-run-ovn-kubernetes\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088794 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-run-ovn-kubernetes\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088828 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088866 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-systemd-units\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088885 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89f46edf-56e3-4041-badf-e70f8c4bddb7-env-overrides\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088906 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-node-log\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088921 
4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-kubelet\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088929 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-run-systemd\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088955 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-run-systemd\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088969 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89f46edf-56e3-4041-badf-e70f8c4bddb7-ovn-node-metrics-cert\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.088988 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-cni-netd\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089010 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89f46edf-56e3-4041-badf-e70f8c4bddb7-ovnkube-config\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089046 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-run-netns\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089068 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-run-openvswitch\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089110 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-run-openvswitch\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089249 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-var-lib-openvswitch\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089296 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-cni-netd\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089485 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-slash\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089544 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-etc-openvswitch\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089579 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089600 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-host-run-netns\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089622 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-node-log\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089650 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/89f46edf-56e3-4041-badf-e70f8c4bddb7-systemd-units\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.089755 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/89f46edf-56e3-4041-badf-e70f8c4bddb7-ovnkube-script-lib\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.090069 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/89f46edf-56e3-4041-badf-e70f8c4bddb7-ovnkube-config\") pod \"ovnkube-node-4lfsr\" (UID: 
\"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.090102 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/89f46edf-56e3-4041-badf-e70f8c4bddb7-env-overrides\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.091969 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/89f46edf-56e3-4041-badf-e70f8c4bddb7-ovn-node-metrics-cert\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.105461 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovnkube-controller/3.log" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.107853 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovn-acl-logging/0.log" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.108354 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-8mw7z_1f8b8acb-f4cf-41db-82f8-94ffd21c1594/ovn-controller/0.log" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.108707 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerID="7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773" exitCode=0 Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.108780 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerID="6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047" exitCode=0 Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.108847 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerID="9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d" exitCode=0 Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.108907 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerID="c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c" exitCode=0 Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.108970 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerID="adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789" exitCode=0 Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.109032 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerID="21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103" exitCode=0 Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.109085 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" containerID="e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da" exitCode=143 Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.109146 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" 
containerID="634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df" exitCode=143 Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.108794 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.108863 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.109381 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.109452 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.109564 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.109632 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.109400 4806 scope.go:117] "RemoveContainer" containerID="7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.109697 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.109914 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110010 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110066 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110120 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110166 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110216 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110262 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110312 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110361 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110411 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110564 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110620 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110666 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110710 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110754 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110807 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110853 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110903 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.110953 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111005 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111132 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111201 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111252 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111299 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111344 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111389 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111433 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111482 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111551 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111629 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111681 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111737 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8mw7z" event={"ID":"1f8b8acb-f4cf-41db-82f8-94ffd21c1594","Type":"ContainerDied","Data":"2165bee14b56b0c6a41e3958e9481ed141857d7e598045c0db4cf2040477f3d7"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111793 4806 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111846 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111891 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111939 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.111987 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.112032 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.112125 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.112177 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.112228 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.112276 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.112327 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7glh_4320ae6b-0d73-47d7-9f2c-f3c5b6b69041/kube-multus/2.log" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.112848 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7glh_4320ae6b-0d73-47d7-9f2c-f3c5b6b69041/kube-multus/1.log" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.112900 4806 generic.go:334] "Generic (PLEG): container finished" podID="4320ae6b-0d73-47d7-9f2c-f3c5b6b69041" containerID="e417d91e63473ab979f371a0a51d02ca944a89619a0becc7adeeadfc324a0b88" exitCode=2 Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.112932 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7glh" event={"ID":"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041","Type":"ContainerDied","Data":"e417d91e63473ab979f371a0a51d02ca944a89619a0becc7adeeadfc324a0b88"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.112954 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd"} Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.113581 4806 scope.go:117] "RemoveContainer" containerID="e417d91e63473ab979f371a0a51d02ca944a89619a0becc7adeeadfc324a0b88" Jan 26 08:03:33 crc kubenswrapper[4806]: E0126 08:03:33.113784 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-d7glh_openshift-multus(4320ae6b-0d73-47d7-9f2c-f3c5b6b69041)\"" pod="openshift-multus/multus-d7glh" podUID="4320ae6b-0d73-47d7-9f2c-f3c5b6b69041" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.115389 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtlxr\" (UniqueName: \"kubernetes.io/projected/89f46edf-56e3-4041-badf-e70f8c4bddb7-kube-api-access-jtlxr\") pod \"ovnkube-node-4lfsr\" (UID: \"89f46edf-56e3-4041-badf-e70f8c4bddb7\") " pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.133809 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8mw7z"] Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.137473 4806 scope.go:117] "RemoveContainer" containerID="df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.140087 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8mw7z"] Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.163314 4806 scope.go:117] "RemoveContainer" containerID="6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.179908 4806 scope.go:117] "RemoveContainer" containerID="9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.196670 4806 scope.go:117] "RemoveContainer" containerID="c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.197437 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.217155 4806 scope.go:117] "RemoveContainer" containerID="adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.243399 4806 scope.go:117] "RemoveContainer" containerID="21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.262389 4806 scope.go:117] "RemoveContainer" containerID="e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.281511 4806 scope.go:117] "RemoveContainer" containerID="634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.304623 4806 scope.go:117] "RemoveContainer" containerID="2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.326736 4806 scope.go:117] "RemoveContainer" containerID="7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773" Jan 26 08:03:33 crc kubenswrapper[4806]: E0126 08:03:33.327086 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773\": container with ID starting with 7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773 not found: ID does not exist" containerID="7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.327139 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773"} err="failed to get container status \"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773\": rpc error: code = NotFound desc = could not find container \"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773\": container with ID starting with 7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.327165 4806 scope.go:117] "RemoveContainer" containerID="df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.327401 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-j6gd8" Jan 26 08:03:33 crc kubenswrapper[4806]: E0126 08:03:33.328228 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\": container with ID starting with df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2 not found: ID does not exist" containerID="df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.328251 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2"} err="failed to get container status \"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\": rpc error: code = NotFound desc = could not find container \"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\": container with ID starting with 
df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.328270 4806 scope.go:117] "RemoveContainer" containerID="6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047" Jan 26 08:03:33 crc kubenswrapper[4806]: E0126 08:03:33.329333 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\": container with ID starting with 6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047 not found: ID does not exist" containerID="6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.329364 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047"} err="failed to get container status \"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\": rpc error: code = NotFound desc = could not find container \"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\": container with ID starting with 6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.329381 4806 scope.go:117] "RemoveContainer" containerID="9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d" Jan 26 08:03:33 crc kubenswrapper[4806]: E0126 08:03:33.329884 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\": container with ID starting with 9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d not found: ID does not exist" containerID="9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.329902 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d"} err="failed to get container status \"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\": rpc error: code = NotFound desc = could not find container \"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\": container with ID starting with 9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.329917 4806 scope.go:117] "RemoveContainer" containerID="c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c" Jan 26 08:03:33 crc kubenswrapper[4806]: E0126 08:03:33.330196 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\": container with ID starting with c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c not found: ID does not exist" containerID="c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.330213 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c"} err="failed to get container status \"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\": rpc 
error: code = NotFound desc = could not find container \"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\": container with ID starting with c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.330228 4806 scope.go:117] "RemoveContainer" containerID="adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789" Jan 26 08:03:33 crc kubenswrapper[4806]: E0126 08:03:33.330487 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\": container with ID starting with adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789 not found: ID does not exist" containerID="adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.330510 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789"} err="failed to get container status \"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\": rpc error: code = NotFound desc = could not find container \"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\": container with ID starting with adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.330729 4806 scope.go:117] "RemoveContainer" containerID="21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103" Jan 26 08:03:33 crc kubenswrapper[4806]: E0126 08:03:33.330958 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\": container with ID starting with 21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103 not found: ID does not exist" containerID="21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.330995 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103"} err="failed to get container status \"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\": rpc error: code = NotFound desc = could not find container \"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\": container with ID starting with 21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.331018 4806 scope.go:117] "RemoveContainer" containerID="e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da" Jan 26 08:03:33 crc kubenswrapper[4806]: E0126 08:03:33.331233 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\": container with ID starting with e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da not found: ID does not exist" containerID="e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.331260 4806 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da"} err="failed to get container status \"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\": rpc error: code = NotFound desc = could not find container \"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\": container with ID starting with e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.331279 4806 scope.go:117] "RemoveContainer" containerID="634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df" Jan 26 08:03:33 crc kubenswrapper[4806]: E0126 08:03:33.331533 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\": container with ID starting with 634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df not found: ID does not exist" containerID="634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.331562 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df"} err="failed to get container status \"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\": rpc error: code = NotFound desc = could not find container \"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\": container with ID starting with 634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.331576 4806 scope.go:117] "RemoveContainer" containerID="2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18" Jan 26 08:03:33 crc kubenswrapper[4806]: E0126 08:03:33.331839 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\": container with ID starting with 2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18 not found: ID does not exist" containerID="2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.331860 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18"} err="failed to get container status \"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\": rpc error: code = NotFound desc = could not find container \"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\": container with ID starting with 2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.331875 4806 scope.go:117] "RemoveContainer" containerID="7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.332077 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773"} err="failed to get container status \"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773\": rpc error: code = NotFound desc = could not find container 
\"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773\": container with ID starting with 7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.332096 4806 scope.go:117] "RemoveContainer" containerID="df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.332297 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2"} err="failed to get container status \"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\": rpc error: code = NotFound desc = could not find container \"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\": container with ID starting with df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.332365 4806 scope.go:117] "RemoveContainer" containerID="6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.332668 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047"} err="failed to get container status \"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\": rpc error: code = NotFound desc = could not find container \"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\": container with ID starting with 6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.332702 4806 scope.go:117] "RemoveContainer" containerID="9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.332918 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d"} err="failed to get container status \"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\": rpc error: code = NotFound desc = could not find container \"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\": container with ID starting with 9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.332947 4806 scope.go:117] "RemoveContainer" containerID="c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.333282 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c"} err="failed to get container status \"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\": rpc error: code = NotFound desc = could not find container \"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\": container with ID starting with c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.333337 4806 scope.go:117] "RemoveContainer" containerID="adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.333662 4806 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789"} err="failed to get container status \"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\": rpc error: code = NotFound desc = could not find container \"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\": container with ID starting with adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.333683 4806 scope.go:117] "RemoveContainer" containerID="21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.333884 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103"} err="failed to get container status \"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\": rpc error: code = NotFound desc = could not find container \"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\": container with ID starting with 21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.333904 4806 scope.go:117] "RemoveContainer" containerID="e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.334116 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da"} err="failed to get container status \"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\": rpc error: code = NotFound desc = could not find container \"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\": container with ID starting with e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.334138 4806 scope.go:117] "RemoveContainer" containerID="634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.334337 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df"} err="failed to get container status \"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\": rpc error: code = NotFound desc = could not find container \"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\": container with ID starting with 634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.334365 4806 scope.go:117] "RemoveContainer" containerID="2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.334582 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18"} err="failed to get container status \"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\": rpc error: code = NotFound desc = could not find container \"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\": container with ID starting with 
2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.334604 4806 scope.go:117] "RemoveContainer" containerID="7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.335682 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773"} err="failed to get container status \"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773\": rpc error: code = NotFound desc = could not find container \"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773\": container with ID starting with 7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.335730 4806 scope.go:117] "RemoveContainer" containerID="df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.336197 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2"} err="failed to get container status \"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\": rpc error: code = NotFound desc = could not find container \"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\": container with ID starting with df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.336226 4806 scope.go:117] "RemoveContainer" containerID="6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.337065 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047"} err="failed to get container status \"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\": rpc error: code = NotFound desc = could not find container \"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\": container with ID starting with 6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.337092 4806 scope.go:117] "RemoveContainer" containerID="9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.337947 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d"} err="failed to get container status \"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\": rpc error: code = NotFound desc = could not find container \"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\": container with ID starting with 9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.337969 4806 scope.go:117] "RemoveContainer" containerID="c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.338482 4806 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c"} err="failed to get container status \"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\": rpc error: code = NotFound desc = could not find container \"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\": container with ID starting with c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.338547 4806 scope.go:117] "RemoveContainer" containerID="adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.339652 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789"} err="failed to get container status \"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\": rpc error: code = NotFound desc = could not find container \"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\": container with ID starting with adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.339683 4806 scope.go:117] "RemoveContainer" containerID="21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.340018 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103"} err="failed to get container status \"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\": rpc error: code = NotFound desc = could not find container \"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\": container with ID starting with 21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.340063 4806 scope.go:117] "RemoveContainer" containerID="e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.340617 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da"} err="failed to get container status \"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\": rpc error: code = NotFound desc = could not find container \"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\": container with ID starting with e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.340769 4806 scope.go:117] "RemoveContainer" containerID="634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.343937 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df"} err="failed to get container status \"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\": rpc error: code = NotFound desc = could not find container \"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\": container with ID starting with 634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df not found: ID does not exist" Jan 
26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.344000 4806 scope.go:117] "RemoveContainer" containerID="2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.347022 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18"} err="failed to get container status \"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\": rpc error: code = NotFound desc = could not find container \"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\": container with ID starting with 2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.347051 4806 scope.go:117] "RemoveContainer" containerID="7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.347385 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773"} err="failed to get container status \"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773\": rpc error: code = NotFound desc = could not find container \"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773\": container with ID starting with 7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.347443 4806 scope.go:117] "RemoveContainer" containerID="df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.347978 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2"} err="failed to get container status \"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\": rpc error: code = NotFound desc = could not find container \"df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2\": container with ID starting with df2814fbaab959fae53eddddeffb17ab97a5a8ad31fbc20bcca4cdad140a24e2 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.348001 4806 scope.go:117] "RemoveContainer" containerID="6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.349384 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047"} err="failed to get container status \"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\": rpc error: code = NotFound desc = could not find container \"6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047\": container with ID starting with 6e9d86c9f33a8ad421c133f7b674eff9362011ad9ce60cbe1b917bda84c85047 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.349416 4806 scope.go:117] "RemoveContainer" containerID="9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.352127 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d"} err="failed to get container status 
\"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\": rpc error: code = NotFound desc = could not find container \"9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d\": container with ID starting with 9d102c6b4ea9d79168f0fec24c556c8fec8e3c1191646eebcbe8ed0289edfb6d not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.352262 4806 scope.go:117] "RemoveContainer" containerID="c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.353146 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c"} err="failed to get container status \"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\": rpc error: code = NotFound desc = could not find container \"c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c\": container with ID starting with c90dede1fc0753cad500555db7d92be73cb9b51fe948e3e65bed66134d85628c not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.353951 4806 scope.go:117] "RemoveContainer" containerID="adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.354319 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789"} err="failed to get container status \"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\": rpc error: code = NotFound desc = could not find container \"adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789\": container with ID starting with adbed86e005e6e9cecb4320e0dff72eab3cfd505ff656a64e6643cf47828c789 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.354351 4806 scope.go:117] "RemoveContainer" containerID="21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.354719 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103"} err="failed to get container status \"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\": rpc error: code = NotFound desc = could not find container \"21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103\": container with ID starting with 21d2ea6bc7f0a91da2b372587419ec1b34cfe3aca8f70dd4e360bf3e818bb103 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.354746 4806 scope.go:117] "RemoveContainer" containerID="e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.355026 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da"} err="failed to get container status \"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\": rpc error: code = NotFound desc = could not find container \"e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da\": container with ID starting with e05c58fd693a7827b1ff019c6e160e4d2198004aca34245ee6e08cd37f5627da not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.355074 4806 scope.go:117] "RemoveContainer" 
containerID="634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.355422 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df"} err="failed to get container status \"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\": rpc error: code = NotFound desc = could not find container \"634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df\": container with ID starting with 634f160a9064240a06e9b425f70ceae6390db7205f898efdc601437e72c362df not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.355455 4806 scope.go:117] "RemoveContainer" containerID="2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.355751 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18"} err="failed to get container status \"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\": rpc error: code = NotFound desc = could not find container \"2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18\": container with ID starting with 2105d09a59a290aa3e1915a2b8c2c8aeb3a3e94ddfd4b48dc574a5362f522d18 not found: ID does not exist" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.355771 4806 scope.go:117] "RemoveContainer" containerID="7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773" Jan 26 08:03:33 crc kubenswrapper[4806]: I0126 08:03:33.355962 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773"} err="failed to get container status \"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773\": rpc error: code = NotFound desc = could not find container \"7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773\": container with ID starting with 7f29f3d947b8f6cdecce74518b658620be055f64b272f6e599f723e1ae44f773 not found: ID does not exist" Jan 26 08:03:34 crc kubenswrapper[4806]: I0126 08:03:34.123844 4806 generic.go:334] "Generic (PLEG): container finished" podID="89f46edf-56e3-4041-badf-e70f8c4bddb7" containerID="bd9a0988c365a0b831411ded5bf5ba970142c021594f5153e017819db5a9223c" exitCode=0 Jan 26 08:03:34 crc kubenswrapper[4806]: I0126 08:03:34.123890 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" event={"ID":"89f46edf-56e3-4041-badf-e70f8c4bddb7","Type":"ContainerDied","Data":"bd9a0988c365a0b831411ded5bf5ba970142c021594f5153e017819db5a9223c"} Jan 26 08:03:34 crc kubenswrapper[4806]: I0126 08:03:34.123918 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" event={"ID":"89f46edf-56e3-4041-badf-e70f8c4bddb7","Type":"ContainerStarted","Data":"1cb6c4581c2b5b7ab47c561614fdc9534b1dcc21e9b2936faf7ba7becca9ae69"} Jan 26 08:03:35 crc kubenswrapper[4806]: I0126 08:03:35.048609 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f8b8acb-f4cf-41db-82f8-94ffd21c1594" path="/var/lib/kubelet/pods/1f8b8acb-f4cf-41db-82f8-94ffd21c1594/volumes" Jan 26 08:03:35 crc kubenswrapper[4806]: I0126 08:03:35.132784 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" 
event={"ID":"89f46edf-56e3-4041-badf-e70f8c4bddb7","Type":"ContainerStarted","Data":"d25cabe1afc0b7682e3e74caab7bd79b59fa9fac88d14c58c34d35a554190000"} Jan 26 08:03:35 crc kubenswrapper[4806]: I0126 08:03:35.133157 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" event={"ID":"89f46edf-56e3-4041-badf-e70f8c4bddb7","Type":"ContainerStarted","Data":"4b766b7942ac84bae32dbc190fb907297d060513a0ae43e91a07ccffb46fb961"} Jan 26 08:03:35 crc kubenswrapper[4806]: I0126 08:03:35.133168 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" event={"ID":"89f46edf-56e3-4041-badf-e70f8c4bddb7","Type":"ContainerStarted","Data":"ce2a5f73724041084409c599cb79ba3a6e331617e6d8ef4d5b58316b2357a8e3"} Jan 26 08:03:35 crc kubenswrapper[4806]: I0126 08:03:35.133177 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" event={"ID":"89f46edf-56e3-4041-badf-e70f8c4bddb7","Type":"ContainerStarted","Data":"bc7bf2bc5c7e342a83d52ac45591db7b445a42e91ba51a94eb8e446e7b652276"} Jan 26 08:03:35 crc kubenswrapper[4806]: I0126 08:03:35.133186 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" event={"ID":"89f46edf-56e3-4041-badf-e70f8c4bddb7","Type":"ContainerStarted","Data":"89fef9d4f6be58a861c1c2eb3fe0120b2c6ca81a43bd0d79f9bfadd542b962af"} Jan 26 08:03:35 crc kubenswrapper[4806]: I0126 08:03:35.133200 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" event={"ID":"89f46edf-56e3-4041-badf-e70f8c4bddb7","Type":"ContainerStarted","Data":"7b9362b7a2b9a4b9c407a4894bc2054af2a981ecca34db8d022e0acce2b72ef9"} Jan 26 08:03:37 crc kubenswrapper[4806]: I0126 08:03:37.146753 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" event={"ID":"89f46edf-56e3-4041-badf-e70f8c4bddb7","Type":"ContainerStarted","Data":"cfc791347e9cb5f161ea2975954d394c9a104eb7d406b56bbb6f2c6b9b4c6d77"} Jan 26 08:03:40 crc kubenswrapper[4806]: I0126 08:03:40.175260 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" event={"ID":"89f46edf-56e3-4041-badf-e70f8c4bddb7","Type":"ContainerStarted","Data":"a1830f4b8565c554eadfb7820b9b29f7c95da579727e6bf7c93dbd4d69293ed6"} Jan 26 08:03:40 crc kubenswrapper[4806]: I0126 08:03:40.176661 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:40 crc kubenswrapper[4806]: I0126 08:03:40.176696 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:40 crc kubenswrapper[4806]: I0126 08:03:40.176717 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:40 crc kubenswrapper[4806]: I0126 08:03:40.206546 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:40 crc kubenswrapper[4806]: I0126 08:03:40.213261 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" podStartSLOduration=8.213240629 podStartE2EDuration="8.213240629s" podCreationTimestamp="2026-01-26 08:03:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-26 08:03:40.206952431 +0000 UTC m=+599.471360517" watchObservedRunningTime="2026-01-26 08:03:40.213240629 +0000 UTC m=+599.477648685" Jan 26 08:03:40 crc kubenswrapper[4806]: I0126 08:03:40.224371 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:03:41 crc kubenswrapper[4806]: I0126 08:03:41.386980 4806 scope.go:117] "RemoveContainer" containerID="0b2b56f712aa5b59214153bb170a5866aea874524f826e53be3e703b8ff912fd" Jan 26 08:03:42 crc kubenswrapper[4806]: I0126 08:03:42.190998 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7glh_4320ae6b-0d73-47d7-9f2c-f3c5b6b69041/kube-multus/2.log" Jan 26 08:03:45 crc kubenswrapper[4806]: I0126 08:03:45.806164 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:03:45 crc kubenswrapper[4806]: I0126 08:03:45.806618 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:03:46 crc kubenswrapper[4806]: I0126 08:03:46.042182 4806 scope.go:117] "RemoveContainer" containerID="e417d91e63473ab979f371a0a51d02ca944a89619a0becc7adeeadfc324a0b88" Jan 26 08:03:46 crc kubenswrapper[4806]: E0126 08:03:46.042414 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-d7glh_openshift-multus(4320ae6b-0d73-47d7-9f2c-f3c5b6b69041)\"" pod="openshift-multus/multus-d7glh" podUID="4320ae6b-0d73-47d7-9f2c-f3c5b6b69041" Jan 26 08:03:59 crc kubenswrapper[4806]: I0126 08:03:59.042723 4806 scope.go:117] "RemoveContainer" containerID="e417d91e63473ab979f371a0a51d02ca944a89619a0becc7adeeadfc324a0b88" Jan 26 08:04:00 crc kubenswrapper[4806]: I0126 08:04:00.322425 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-d7glh_4320ae6b-0d73-47d7-9f2c-f3c5b6b69041/kube-multus/2.log" Jan 26 08:04:00 crc kubenswrapper[4806]: I0126 08:04:00.324978 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-d7glh" event={"ID":"4320ae6b-0d73-47d7-9f2c-f3c5b6b69041","Type":"ContainerStarted","Data":"d858f03767b79290a1ca89fde2767620af279f4471c141a2a4283422dc93f4a6"} Jan 26 08:04:03 crc kubenswrapper[4806]: I0126 08:04:03.232453 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4lfsr" Jan 26 08:04:15 crc kubenswrapper[4806]: I0126 08:04:15.806149 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:04:15 crc kubenswrapper[4806]: I0126 08:04:15.806938 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:04:15 crc kubenswrapper[4806]: I0126 08:04:15.807002 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:04:15 crc kubenswrapper[4806]: I0126 08:04:15.807846 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25fe21fbdefc972bf60875548f11358df4e04c7bb242af40b8201587c399a5cc"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:04:15 crc kubenswrapper[4806]: I0126 08:04:15.807983 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://25fe21fbdefc972bf60875548f11358df4e04c7bb242af40b8201587c399a5cc" gracePeriod=600 Jan 26 08:04:16 crc kubenswrapper[4806]: I0126 08:04:16.434655 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="25fe21fbdefc972bf60875548f11358df4e04c7bb242af40b8201587c399a5cc" exitCode=0 Jan 26 08:04:16 crc kubenswrapper[4806]: I0126 08:04:16.434731 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"25fe21fbdefc972bf60875548f11358df4e04c7bb242af40b8201587c399a5cc"} Jan 26 08:04:16 crc kubenswrapper[4806]: I0126 08:04:16.435094 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"1043e4eeb08886878cec455f2ca6376f949985237b4b0930fb8995d1f97399b2"} Jan 26 08:04:16 crc kubenswrapper[4806]: I0126 08:04:16.435146 4806 scope.go:117] "RemoveContainer" containerID="61cccd600d491aa95cc07ae3edd2fe4d985307d841d68d06d1cce694939e53c9" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.146160 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p"] Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.147505 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.150023 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.152510 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e4ad2fa-5351-43e6-b30b-646bd63ade85-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p\" (UID: \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.152567 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lscr8\" (UniqueName: \"kubernetes.io/projected/7e4ad2fa-5351-43e6-b30b-646bd63ade85-kube-api-access-lscr8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p\" (UID: \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.152622 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e4ad2fa-5351-43e6-b30b-646bd63ade85-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p\" (UID: \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.158461 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p"] Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.253947 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e4ad2fa-5351-43e6-b30b-646bd63ade85-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p\" (UID: \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.254204 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lscr8\" (UniqueName: \"kubernetes.io/projected/7e4ad2fa-5351-43e6-b30b-646bd63ade85-kube-api-access-lscr8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p\" (UID: \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.254345 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e4ad2fa-5351-43e6-b30b-646bd63ade85-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p\" (UID: \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.254488 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/7e4ad2fa-5351-43e6-b30b-646bd63ade85-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p\" (UID: \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.254700 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e4ad2fa-5351-43e6-b30b-646bd63ade85-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p\" (UID: \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.272290 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lscr8\" (UniqueName: \"kubernetes.io/projected/7e4ad2fa-5351-43e6-b30b-646bd63ade85-kube-api-access-lscr8\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p\" (UID: \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.465914 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:18 crc kubenswrapper[4806]: I0126 08:04:18.666116 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p"] Jan 26 08:04:19 crc kubenswrapper[4806]: I0126 08:04:19.452657 4806 generic.go:334] "Generic (PLEG): container finished" podID="7e4ad2fa-5351-43e6-b30b-646bd63ade85" containerID="0a1647d84abce4d40327ae93a8125b738767b789611102b647b3453ed2a2e0aa" exitCode=0 Jan 26 08:04:19 crc kubenswrapper[4806]: I0126 08:04:19.452774 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" event={"ID":"7e4ad2fa-5351-43e6-b30b-646bd63ade85","Type":"ContainerDied","Data":"0a1647d84abce4d40327ae93a8125b738767b789611102b647b3453ed2a2e0aa"} Jan 26 08:04:19 crc kubenswrapper[4806]: I0126 08:04:19.454092 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" event={"ID":"7e4ad2fa-5351-43e6-b30b-646bd63ade85","Type":"ContainerStarted","Data":"631693ce3f620db4394be5aa309908d162d40a3c5eaf5075d89914b3d4ab095e"} Jan 26 08:04:21 crc kubenswrapper[4806]: I0126 08:04:21.466233 4806 generic.go:334] "Generic (PLEG): container finished" podID="7e4ad2fa-5351-43e6-b30b-646bd63ade85" containerID="e068c5cc280710fd9a9ab22d64c0c7bf97ad4dc15064ec711c145db14f500608" exitCode=0 Jan 26 08:04:21 crc kubenswrapper[4806]: I0126 08:04:21.466322 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" event={"ID":"7e4ad2fa-5351-43e6-b30b-646bd63ade85","Type":"ContainerDied","Data":"e068c5cc280710fd9a9ab22d64c0c7bf97ad4dc15064ec711c145db14f500608"} Jan 26 08:04:22 crc kubenswrapper[4806]: I0126 08:04:22.477588 4806 generic.go:334] "Generic (PLEG): container finished" podID="7e4ad2fa-5351-43e6-b30b-646bd63ade85" containerID="d5a5a245365896571b5ed1ba5a2301f826a73ff0a28c4a1f4e92002edcc3716d" exitCode=0 Jan 26 08:04:22 crc kubenswrapper[4806]: I0126 
08:04:22.477663 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" event={"ID":"7e4ad2fa-5351-43e6-b30b-646bd63ade85","Type":"ContainerDied","Data":"d5a5a245365896571b5ed1ba5a2301f826a73ff0a28c4a1f4e92002edcc3716d"} Jan 26 08:04:23 crc kubenswrapper[4806]: I0126 08:04:23.777790 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:23 crc kubenswrapper[4806]: I0126 08:04:23.935923 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e4ad2fa-5351-43e6-b30b-646bd63ade85-bundle\") pod \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\" (UID: \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\") " Jan 26 08:04:23 crc kubenswrapper[4806]: I0126 08:04:23.936147 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e4ad2fa-5351-43e6-b30b-646bd63ade85-util\") pod \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\" (UID: \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\") " Jan 26 08:04:23 crc kubenswrapper[4806]: I0126 08:04:23.936390 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lscr8\" (UniqueName: \"kubernetes.io/projected/7e4ad2fa-5351-43e6-b30b-646bd63ade85-kube-api-access-lscr8\") pod \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\" (UID: \"7e4ad2fa-5351-43e6-b30b-646bd63ade85\") " Jan 26 08:04:23 crc kubenswrapper[4806]: I0126 08:04:23.937185 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e4ad2fa-5351-43e6-b30b-646bd63ade85-bundle" (OuterVolumeSpecName: "bundle") pod "7e4ad2fa-5351-43e6-b30b-646bd63ade85" (UID: "7e4ad2fa-5351-43e6-b30b-646bd63ade85"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:04:23 crc kubenswrapper[4806]: I0126 08:04:23.943736 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4ad2fa-5351-43e6-b30b-646bd63ade85-kube-api-access-lscr8" (OuterVolumeSpecName: "kube-api-access-lscr8") pod "7e4ad2fa-5351-43e6-b30b-646bd63ade85" (UID: "7e4ad2fa-5351-43e6-b30b-646bd63ade85"). InnerVolumeSpecName "kube-api-access-lscr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:04:24 crc kubenswrapper[4806]: I0126 08:04:24.038448 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lscr8\" (UniqueName: \"kubernetes.io/projected/7e4ad2fa-5351-43e6-b30b-646bd63ade85-kube-api-access-lscr8\") on node \"crc\" DevicePath \"\"" Jan 26 08:04:24 crc kubenswrapper[4806]: I0126 08:04:24.038484 4806 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7e4ad2fa-5351-43e6-b30b-646bd63ade85-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:04:24 crc kubenswrapper[4806]: I0126 08:04:24.186107 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e4ad2fa-5351-43e6-b30b-646bd63ade85-util" (OuterVolumeSpecName: "util") pod "7e4ad2fa-5351-43e6-b30b-646bd63ade85" (UID: "7e4ad2fa-5351-43e6-b30b-646bd63ade85"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:04:24 crc kubenswrapper[4806]: I0126 08:04:24.240679 4806 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7e4ad2fa-5351-43e6-b30b-646bd63ade85-util\") on node \"crc\" DevicePath \"\"" Jan 26 08:04:24 crc kubenswrapper[4806]: I0126 08:04:24.497633 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" event={"ID":"7e4ad2fa-5351-43e6-b30b-646bd63ade85","Type":"ContainerDied","Data":"631693ce3f620db4394be5aa309908d162d40a3c5eaf5075d89914b3d4ab095e"} Jan 26 08:04:24 crc kubenswrapper[4806]: I0126 08:04:24.497684 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="631693ce3f620db4394be5aa309908d162d40a3c5eaf5075d89914b3d4ab095e" Jan 26 08:04:24 crc kubenswrapper[4806]: I0126 08:04:24.497700 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p" Jan 26 08:04:26 crc kubenswrapper[4806]: I0126 08:04:26.910287 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r8wcz"] Jan 26 08:04:26 crc kubenswrapper[4806]: E0126 08:04:26.910488 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4ad2fa-5351-43e6-b30b-646bd63ade85" containerName="extract" Jan 26 08:04:26 crc kubenswrapper[4806]: I0126 08:04:26.910499 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4ad2fa-5351-43e6-b30b-646bd63ade85" containerName="extract" Jan 26 08:04:26 crc kubenswrapper[4806]: E0126 08:04:26.910532 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4ad2fa-5351-43e6-b30b-646bd63ade85" containerName="pull" Jan 26 08:04:26 crc kubenswrapper[4806]: I0126 08:04:26.910538 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4ad2fa-5351-43e6-b30b-646bd63ade85" containerName="pull" Jan 26 08:04:26 crc kubenswrapper[4806]: E0126 08:04:26.910547 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4ad2fa-5351-43e6-b30b-646bd63ade85" containerName="util" Jan 26 08:04:26 crc kubenswrapper[4806]: I0126 08:04:26.910553 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4ad2fa-5351-43e6-b30b-646bd63ade85" containerName="util" Jan 26 08:04:26 crc kubenswrapper[4806]: I0126 08:04:26.910646 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4ad2fa-5351-43e6-b30b-646bd63ade85" containerName="extract" Jan 26 08:04:26 crc kubenswrapper[4806]: I0126 08:04:26.911000 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-r8wcz" Jan 26 08:04:26 crc kubenswrapper[4806]: I0126 08:04:26.913749 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-znt28" Jan 26 08:04:26 crc kubenswrapper[4806]: I0126 08:04:26.914149 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 08:04:26 crc kubenswrapper[4806]: I0126 08:04:26.915502 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 08:04:26 crc kubenswrapper[4806]: I0126 08:04:26.921942 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r8wcz"] Jan 26 08:04:26 crc kubenswrapper[4806]: I0126 08:04:26.979318 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2b4d\" (UniqueName: \"kubernetes.io/projected/4a979c8c-9902-414a-8458-4cac2b34e61d-kube-api-access-j2b4d\") pod \"nmstate-operator-646758c888-r8wcz\" (UID: \"4a979c8c-9902-414a-8458-4cac2b34e61d\") " pod="openshift-nmstate/nmstate-operator-646758c888-r8wcz" Jan 26 08:04:27 crc kubenswrapper[4806]: I0126 08:04:27.081350 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2b4d\" (UniqueName: \"kubernetes.io/projected/4a979c8c-9902-414a-8458-4cac2b34e61d-kube-api-access-j2b4d\") pod \"nmstate-operator-646758c888-r8wcz\" (UID: \"4a979c8c-9902-414a-8458-4cac2b34e61d\") " pod="openshift-nmstate/nmstate-operator-646758c888-r8wcz" Jan 26 08:04:27 crc kubenswrapper[4806]: I0126 08:04:27.105740 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2b4d\" (UniqueName: \"kubernetes.io/projected/4a979c8c-9902-414a-8458-4cac2b34e61d-kube-api-access-j2b4d\") pod \"nmstate-operator-646758c888-r8wcz\" (UID: \"4a979c8c-9902-414a-8458-4cac2b34e61d\") " pod="openshift-nmstate/nmstate-operator-646758c888-r8wcz" Jan 26 08:04:27 crc kubenswrapper[4806]: I0126 08:04:27.241266 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-r8wcz" Jan 26 08:04:27 crc kubenswrapper[4806]: I0126 08:04:27.516353 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-r8wcz"] Jan 26 08:04:27 crc kubenswrapper[4806]: W0126 08:04:27.547760 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a979c8c_9902_414a_8458_4cac2b34e61d.slice/crio-057dfb1d4db5979170da20392e2100d83741cb897cb0dd5f1b0cc619c4bcf249 WatchSource:0}: Error finding container 057dfb1d4db5979170da20392e2100d83741cb897cb0dd5f1b0cc619c4bcf249: Status 404 returned error can't find the container with id 057dfb1d4db5979170da20392e2100d83741cb897cb0dd5f1b0cc619c4bcf249 Jan 26 08:04:28 crc kubenswrapper[4806]: I0126 08:04:28.518471 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-r8wcz" event={"ID":"4a979c8c-9902-414a-8458-4cac2b34e61d","Type":"ContainerStarted","Data":"057dfb1d4db5979170da20392e2100d83741cb897cb0dd5f1b0cc619c4bcf249"} Jan 26 08:04:30 crc kubenswrapper[4806]: I0126 08:04:30.529555 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-r8wcz" event={"ID":"4a979c8c-9902-414a-8458-4cac2b34e61d","Type":"ContainerStarted","Data":"3b69e0f5dcf1a67ffcc5c365324ca31d709649f19a56559665e4dcf998922fee"} Jan 26 08:04:30 crc kubenswrapper[4806]: I0126 08:04:30.572943 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-r8wcz" podStartSLOduration=2.19080837 podStartE2EDuration="4.572914013s" podCreationTimestamp="2026-01-26 08:04:26 +0000 UTC" firstStartedPulling="2026-01-26 08:04:27.550335548 +0000 UTC m=+646.814743604" lastFinishedPulling="2026-01-26 08:04:29.932441191 +0000 UTC m=+649.196849247" observedRunningTime="2026-01-26 08:04:30.570663169 +0000 UTC m=+649.835071225" watchObservedRunningTime="2026-01-26 08:04:30.572914013 +0000 UTC m=+649.837322069" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.526824 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-chwqp"] Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.528242 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-chwqp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.532500 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xmk9k" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.540726 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt"] Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.541506 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.544262 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.565798 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt"] Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.596048 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-jbwmp"] Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.596980 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.640873 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plbml\" (UniqueName: \"kubernetes.io/projected/22afad0a-47c8-44b3-87b3-342559ef78f5-kube-api-access-plbml\") pod \"nmstate-webhook-8474b5b9d8-bmxmt\" (UID: \"22afad0a-47c8-44b3-87b3-342559ef78f5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.640938 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq5k5\" (UniqueName: \"kubernetes.io/projected/8c540a44-feb5-4c62-b4ad-f1f0dfd40576-kube-api-access-sq5k5\") pod \"nmstate-metrics-54757c584b-chwqp\" (UID: \"8c540a44-feb5-4c62-b4ad-f1f0dfd40576\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-chwqp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.641020 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/22afad0a-47c8-44b3-87b3-342559ef78f5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bmxmt\" (UID: \"22afad0a-47c8-44b3-87b3-342559ef78f5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.666156 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-chwqp"] Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.704167 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62"] Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.704887 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.712935 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.716878 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nxxd5" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.718557 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.725738 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62"] Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.742689 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e20bf81f-2252-43c2-9f16-0cca133f9b13-ovs-socket\") pod \"nmstate-handler-jbwmp\" (UID: \"e20bf81f-2252-43c2-9f16-0cca133f9b13\") " pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.742813 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/22afad0a-47c8-44b3-87b3-342559ef78f5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bmxmt\" (UID: \"22afad0a-47c8-44b3-87b3-342559ef78f5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.742906 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e20bf81f-2252-43c2-9f16-0cca133f9b13-nmstate-lock\") pod \"nmstate-handler-jbwmp\" (UID: \"e20bf81f-2252-43c2-9f16-0cca133f9b13\") " pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.743037 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hpcb\" (UniqueName: \"kubernetes.io/projected/e20bf81f-2252-43c2-9f16-0cca133f9b13-kube-api-access-6hpcb\") pod \"nmstate-handler-jbwmp\" (UID: \"e20bf81f-2252-43c2-9f16-0cca133f9b13\") " pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.743077 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e20bf81f-2252-43c2-9f16-0cca133f9b13-dbus-socket\") pod \"nmstate-handler-jbwmp\" (UID: \"e20bf81f-2252-43c2-9f16-0cca133f9b13\") " pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.743094 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plbml\" (UniqueName: \"kubernetes.io/projected/22afad0a-47c8-44b3-87b3-342559ef78f5-kube-api-access-plbml\") pod \"nmstate-webhook-8474b5b9d8-bmxmt\" (UID: \"22afad0a-47c8-44b3-87b3-342559ef78f5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.743117 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq5k5\" (UniqueName: \"kubernetes.io/projected/8c540a44-feb5-4c62-b4ad-f1f0dfd40576-kube-api-access-sq5k5\") pod \"nmstate-metrics-54757c584b-chwqp\" (UID: 
\"8c540a44-feb5-4c62-b4ad-f1f0dfd40576\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-chwqp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.749470 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/22afad0a-47c8-44b3-87b3-342559ef78f5-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bmxmt\" (UID: \"22afad0a-47c8-44b3-87b3-342559ef78f5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.761383 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plbml\" (UniqueName: \"kubernetes.io/projected/22afad0a-47c8-44b3-87b3-342559ef78f5-kube-api-access-plbml\") pod \"nmstate-webhook-8474b5b9d8-bmxmt\" (UID: \"22afad0a-47c8-44b3-87b3-342559ef78f5\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.761804 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq5k5\" (UniqueName: \"kubernetes.io/projected/8c540a44-feb5-4c62-b4ad-f1f0dfd40576-kube-api-access-sq5k5\") pod \"nmstate-metrics-54757c584b-chwqp\" (UID: \"8c540a44-feb5-4c62-b4ad-f1f0dfd40576\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-chwqp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.844425 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e20bf81f-2252-43c2-9f16-0cca133f9b13-ovs-socket\") pod \"nmstate-handler-jbwmp\" (UID: \"e20bf81f-2252-43c2-9f16-0cca133f9b13\") " pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.844940 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e20bf81f-2252-43c2-9f16-0cca133f9b13-nmstate-lock\") pod \"nmstate-handler-jbwmp\" (UID: \"e20bf81f-2252-43c2-9f16-0cca133f9b13\") " pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.844967 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hpcb\" (UniqueName: \"kubernetes.io/projected/e20bf81f-2252-43c2-9f16-0cca133f9b13-kube-api-access-6hpcb\") pod \"nmstate-handler-jbwmp\" (UID: \"e20bf81f-2252-43c2-9f16-0cca133f9b13\") " pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.844558 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e20bf81f-2252-43c2-9f16-0cca133f9b13-ovs-socket\") pod \"nmstate-handler-jbwmp\" (UID: \"e20bf81f-2252-43c2-9f16-0cca133f9b13\") " pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.845019 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e20bf81f-2252-43c2-9f16-0cca133f9b13-dbus-socket\") pod \"nmstate-handler-jbwmp\" (UID: \"e20bf81f-2252-43c2-9f16-0cca133f9b13\") " pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.845055 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e20bf81f-2252-43c2-9f16-0cca133f9b13-nmstate-lock\") pod \"nmstate-handler-jbwmp\" (UID: \"e20bf81f-2252-43c2-9f16-0cca133f9b13\") " 
pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.845090 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ff219c8b-8864-482f-9524-11c05e3fef70-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8wp62\" (UID: \"ff219c8b-8864-482f-9524-11c05e3fef70\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.844797 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-chwqp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.845160 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slsv8\" (UniqueName: \"kubernetes.io/projected/ff219c8b-8864-482f-9524-11c05e3fef70-kube-api-access-slsv8\") pod \"nmstate-console-plugin-7754f76f8b-8wp62\" (UID: \"ff219c8b-8864-482f-9524-11c05e3fef70\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.845210 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff219c8b-8864-482f-9524-11c05e3fef70-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8wp62\" (UID: \"ff219c8b-8864-482f-9524-11c05e3fef70\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.845282 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e20bf81f-2252-43c2-9f16-0cca133f9b13-dbus-socket\") pod \"nmstate-handler-jbwmp\" (UID: \"e20bf81f-2252-43c2-9f16-0cca133f9b13\") " pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.865293 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hpcb\" (UniqueName: \"kubernetes.io/projected/e20bf81f-2252-43c2-9f16-0cca133f9b13-kube-api-access-6hpcb\") pod \"nmstate-handler-jbwmp\" (UID: \"e20bf81f-2252-43c2-9f16-0cca133f9b13\") " pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.896421 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.911951 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.913189 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7b8649454d-pcgjf"] Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.914221 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.941145 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b8649454d-pcgjf"] Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.946408 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ff219c8b-8864-482f-9524-11c05e3fef70-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8wp62\" (UID: \"ff219c8b-8864-482f-9524-11c05e3fef70\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.946444 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slsv8\" (UniqueName: \"kubernetes.io/projected/ff219c8b-8864-482f-9524-11c05e3fef70-kube-api-access-slsv8\") pod \"nmstate-console-plugin-7754f76f8b-8wp62\" (UID: \"ff219c8b-8864-482f-9524-11c05e3fef70\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.946468 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff219c8b-8864-482f-9524-11c05e3fef70-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8wp62\" (UID: \"ff219c8b-8864-482f-9524-11c05e3fef70\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.949001 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ff219c8b-8864-482f-9524-11c05e3fef70-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-8wp62\" (UID: \"ff219c8b-8864-482f-9524-11c05e3fef70\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.951685 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ff219c8b-8864-482f-9524-11c05e3fef70-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-8wp62\" (UID: \"ff219c8b-8864-482f-9524-11c05e3fef70\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" Jan 26 08:04:31 crc kubenswrapper[4806]: I0126 08:04:31.967357 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slsv8\" (UniqueName: \"kubernetes.io/projected/ff219c8b-8864-482f-9524-11c05e3fef70-kube-api-access-slsv8\") pod \"nmstate-console-plugin-7754f76f8b-8wp62\" (UID: \"ff219c8b-8864-482f-9524-11c05e3fef70\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.016938 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.047821 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-service-ca\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.048271 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-console-config\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.048311 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-trusted-ca-bundle\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.048380 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnx2w\" (UniqueName: \"kubernetes.io/projected/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-kube-api-access-hnx2w\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.048409 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-console-serving-cert\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.048427 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-oauth-serving-cert\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.048446 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-console-oauth-config\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.108780 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-chwqp"] Jan 26 08:04:32 crc kubenswrapper[4806]: W0126 08:04:32.123723 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c540a44_feb5_4c62_b4ad_f1f0dfd40576.slice/crio-ab49ac77d4d67655812dfa80329fdd675ddfb7ccc57ddf95db4df5b70b4ce2ae WatchSource:0}: Error finding container 
ab49ac77d4d67655812dfa80329fdd675ddfb7ccc57ddf95db4df5b70b4ce2ae: Status 404 returned error can't find the container with id ab49ac77d4d67655812dfa80329fdd675ddfb7ccc57ddf95db4df5b70b4ce2ae Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.149382 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-console-config\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.149436 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-trusted-ca-bundle\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.149483 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnx2w\" (UniqueName: \"kubernetes.io/projected/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-kube-api-access-hnx2w\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.149510 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-console-serving-cert\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.149538 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-oauth-serving-cert\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.149560 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-console-oauth-config\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.149579 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-service-ca\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.151388 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-service-ca\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.151937 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-console-config\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.151951 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-trusted-ca-bundle\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.159684 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-console-serving-cert\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.160934 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-oauth-serving-cert\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.161244 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-console-oauth-config\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.169905 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt"] Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.179301 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnx2w\" (UniqueName: \"kubernetes.io/projected/5f6320a8-8bd5-4aa9-8770-2803aa7e7052-kube-api-access-hnx2w\") pod \"console-7b8649454d-pcgjf\" (UID: \"5f6320a8-8bd5-4aa9-8770-2803aa7e7052\") " pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: W0126 08:04:32.181722 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22afad0a_47c8_44b3_87b3_342559ef78f5.slice/crio-81575c4bb2b4aa048fb1739d46f9e39f12c65e97ff05d24fb576488762fb7026 WatchSource:0}: Error finding container 81575c4bb2b4aa048fb1739d46f9e39f12c65e97ff05d24fb576488762fb7026: Status 404 returned error can't find the container with id 81575c4bb2b4aa048fb1739d46f9e39f12c65e97ff05d24fb576488762fb7026 Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.231846 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62"] Jan 26 08:04:32 crc kubenswrapper[4806]: W0126 08:04:32.236315 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff219c8b_8864_482f_9524_11c05e3fef70.slice/crio-0563af2c04fad64f88f977b926b9c0e3296e2d3eec452461b996b77d899687f1 WatchSource:0}: Error finding container 0563af2c04fad64f88f977b926b9c0e3296e2d3eec452461b996b77d899687f1: Status 404 returned error can't find the container with id 
0563af2c04fad64f88f977b926b9c0e3296e2d3eec452461b996b77d899687f1 Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.248956 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.411091 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7b8649454d-pcgjf"] Jan 26 08:04:32 crc kubenswrapper[4806]: W0126 08:04:32.417606 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f6320a8_8bd5_4aa9_8770_2803aa7e7052.slice/crio-67d0b3e0f5de7cd2fa823c94d6876121130194d573ad435337959822992013e4 WatchSource:0}: Error finding container 67d0b3e0f5de7cd2fa823c94d6876121130194d573ad435337959822992013e4: Status 404 returned error can't find the container with id 67d0b3e0f5de7cd2fa823c94d6876121130194d573ad435337959822992013e4 Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.541325 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b8649454d-pcgjf" event={"ID":"5f6320a8-8bd5-4aa9-8770-2803aa7e7052","Type":"ContainerStarted","Data":"67d0b3e0f5de7cd2fa823c94d6876121130194d573ad435337959822992013e4"} Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.542420 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" event={"ID":"ff219c8b-8864-482f-9524-11c05e3fef70","Type":"ContainerStarted","Data":"0563af2c04fad64f88f977b926b9c0e3296e2d3eec452461b996b77d899687f1"} Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.545123 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-chwqp" event={"ID":"8c540a44-feb5-4c62-b4ad-f1f0dfd40576","Type":"ContainerStarted","Data":"ab49ac77d4d67655812dfa80329fdd675ddfb7ccc57ddf95db4df5b70b4ce2ae"} Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.546688 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" event={"ID":"22afad0a-47c8-44b3-87b3-342559ef78f5","Type":"ContainerStarted","Data":"81575c4bb2b4aa048fb1739d46f9e39f12c65e97ff05d24fb576488762fb7026"} Jan 26 08:04:32 crc kubenswrapper[4806]: I0126 08:04:32.547456 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jbwmp" event={"ID":"e20bf81f-2252-43c2-9f16-0cca133f9b13","Type":"ContainerStarted","Data":"27fd70104cfd5679207423f7d5eb20265d681e486a8a167087a9293de4eeae9c"} Jan 26 08:04:33 crc kubenswrapper[4806]: I0126 08:04:33.555305 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7b8649454d-pcgjf" event={"ID":"5f6320a8-8bd5-4aa9-8770-2803aa7e7052","Type":"ContainerStarted","Data":"e2bc19539ac81247e1c19cb2df16ef36c48eaf999a5012734256050f1796c320"} Jan 26 08:04:33 crc kubenswrapper[4806]: I0126 08:04:33.577369 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7b8649454d-pcgjf" podStartSLOduration=2.576734159 podStartE2EDuration="2.576734159s" podCreationTimestamp="2026-01-26 08:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:04:33.572976753 +0000 UTC m=+652.837384809" watchObservedRunningTime="2026-01-26 08:04:33.576734159 +0000 UTC m=+652.841142215" Jan 26 08:04:35 crc kubenswrapper[4806]: I0126 08:04:35.572937 4806 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" event={"ID":"ff219c8b-8864-482f-9524-11c05e3fef70","Type":"ContainerStarted","Data":"144fc129b64cb81ec64e1eb87a21a822da98610260665e949f424e89f66098e3"} Jan 26 08:04:35 crc kubenswrapper[4806]: I0126 08:04:35.574784 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-chwqp" event={"ID":"8c540a44-feb5-4c62-b4ad-f1f0dfd40576","Type":"ContainerStarted","Data":"5db9516167f769d88354f27137c493883b4b2133b3b8133cf8e6e2015cc65fc0"} Jan 26 08:04:35 crc kubenswrapper[4806]: I0126 08:04:35.575855 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" event={"ID":"22afad0a-47c8-44b3-87b3-342559ef78f5","Type":"ContainerStarted","Data":"36c1a6b6575d76302a2466fb699e0d446e796db7c7f6edad1d78f45fc1a0150b"} Jan 26 08:04:35 crc kubenswrapper[4806]: I0126 08:04:35.576207 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" Jan 26 08:04:35 crc kubenswrapper[4806]: I0126 08:04:35.578847 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-jbwmp" event={"ID":"e20bf81f-2252-43c2-9f16-0cca133f9b13","Type":"ContainerStarted","Data":"cb83e03f8738fc53d061bfe8a04c42786a8c620b05cbbeeca5d23f95ce04f0e9"} Jan 26 08:04:35 crc kubenswrapper[4806]: I0126 08:04:35.594418 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-8wp62" podStartSLOduration=1.984773622 podStartE2EDuration="4.594398342s" podCreationTimestamp="2026-01-26 08:04:31 +0000 UTC" firstStartedPulling="2026-01-26 08:04:32.240405907 +0000 UTC m=+651.504813963" lastFinishedPulling="2026-01-26 08:04:34.850030637 +0000 UTC m=+654.114438683" observedRunningTime="2026-01-26 08:04:35.590707468 +0000 UTC m=+654.855115524" watchObservedRunningTime="2026-01-26 08:04:35.594398342 +0000 UTC m=+654.858806398" Jan 26 08:04:35 crc kubenswrapper[4806]: I0126 08:04:35.641539 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-jbwmp" podStartSLOduration=1.737166762 podStartE2EDuration="4.641507189s" podCreationTimestamp="2026-01-26 08:04:31 +0000 UTC" firstStartedPulling="2026-01-26 08:04:31.963992925 +0000 UTC m=+651.228400981" lastFinishedPulling="2026-01-26 08:04:34.868333352 +0000 UTC m=+654.132741408" observedRunningTime="2026-01-26 08:04:35.617877673 +0000 UTC m=+654.882285729" watchObservedRunningTime="2026-01-26 08:04:35.641507189 +0000 UTC m=+654.905915245" Jan 26 08:04:35 crc kubenswrapper[4806]: I0126 08:04:35.642262 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" podStartSLOduration=1.9737564920000001 podStartE2EDuration="4.64225816s" podCreationTimestamp="2026-01-26 08:04:31 +0000 UTC" firstStartedPulling="2026-01-26 08:04:32.183913916 +0000 UTC m=+651.448321972" lastFinishedPulling="2026-01-26 08:04:34.852415574 +0000 UTC m=+654.116823640" observedRunningTime="2026-01-26 08:04:35.639862082 +0000 UTC m=+654.904270148" watchObservedRunningTime="2026-01-26 08:04:35.64225816 +0000 UTC m=+654.906666216" Jan 26 08:04:36 crc kubenswrapper[4806]: I0126 08:04:36.585055 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:37 crc kubenswrapper[4806]: I0126 
08:04:37.590539 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-chwqp" event={"ID":"8c540a44-feb5-4c62-b4ad-f1f0dfd40576","Type":"ContainerStarted","Data":"dd972ec90b0d5fb477424aca8a5b294d0e7551829a5b00c48a56610121909bc1"} Jan 26 08:04:37 crc kubenswrapper[4806]: I0126 08:04:37.608543 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-chwqp" podStartSLOduration=1.6088562 podStartE2EDuration="6.608502306s" podCreationTimestamp="2026-01-26 08:04:31 +0000 UTC" firstStartedPulling="2026-01-26 08:04:32.12757687 +0000 UTC m=+651.391984926" lastFinishedPulling="2026-01-26 08:04:37.127222976 +0000 UTC m=+656.391631032" observedRunningTime="2026-01-26 08:04:37.604701329 +0000 UTC m=+656.869109395" watchObservedRunningTime="2026-01-26 08:04:37.608502306 +0000 UTC m=+656.872910372" Jan 26 08:04:41 crc kubenswrapper[4806]: I0126 08:04:41.949285 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-jbwmp" Jan 26 08:04:42 crc kubenswrapper[4806]: I0126 08:04:42.249900 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:42 crc kubenswrapper[4806]: I0126 08:04:42.250006 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:42 crc kubenswrapper[4806]: I0126 08:04:42.257220 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:42 crc kubenswrapper[4806]: I0126 08:04:42.631096 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7b8649454d-pcgjf" Jan 26 08:04:42 crc kubenswrapper[4806]: I0126 08:04:42.690369 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qd6mh"] Jan 26 08:04:51 crc kubenswrapper[4806]: I0126 08:04:51.905938 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bmxmt" Jan 26 08:05:04 crc kubenswrapper[4806]: I0126 08:05:04.945796 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6"] Jan 26 08:05:04 crc kubenswrapper[4806]: I0126 08:05:04.947347 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:04 crc kubenswrapper[4806]: I0126 08:05:04.953285 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 08:05:04 crc kubenswrapper[4806]: I0126 08:05:04.969862 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6"] Jan 26 08:05:05 crc kubenswrapper[4806]: I0126 08:05:05.144209 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4da3d341-b501-409d-9834-02c5ccf5cada-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6\" (UID: \"4da3d341-b501-409d-9834-02c5ccf5cada\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:05 crc kubenswrapper[4806]: I0126 08:05:05.144264 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4da3d341-b501-409d-9834-02c5ccf5cada-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6\" (UID: \"4da3d341-b501-409d-9834-02c5ccf5cada\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:05 crc kubenswrapper[4806]: I0126 08:05:05.144353 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvksc\" (UniqueName: \"kubernetes.io/projected/4da3d341-b501-409d-9834-02c5ccf5cada-kube-api-access-dvksc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6\" (UID: \"4da3d341-b501-409d-9834-02c5ccf5cada\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:05 crc kubenswrapper[4806]: I0126 08:05:05.245788 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4da3d341-b501-409d-9834-02c5ccf5cada-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6\" (UID: \"4da3d341-b501-409d-9834-02c5ccf5cada\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:05 crc kubenswrapper[4806]: I0126 08:05:05.245831 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4da3d341-b501-409d-9834-02c5ccf5cada-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6\" (UID: \"4da3d341-b501-409d-9834-02c5ccf5cada\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:05 crc kubenswrapper[4806]: I0126 08:05:05.245862 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvksc\" (UniqueName: \"kubernetes.io/projected/4da3d341-b501-409d-9834-02c5ccf5cada-kube-api-access-dvksc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6\" (UID: \"4da3d341-b501-409d-9834-02c5ccf5cada\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:05 crc kubenswrapper[4806]: I0126 08:05:05.246641 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4da3d341-b501-409d-9834-02c5ccf5cada-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6\" (UID: \"4da3d341-b501-409d-9834-02c5ccf5cada\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:05 crc kubenswrapper[4806]: I0126 08:05:05.246652 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4da3d341-b501-409d-9834-02c5ccf5cada-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6\" (UID: \"4da3d341-b501-409d-9834-02c5ccf5cada\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:05 crc kubenswrapper[4806]: I0126 08:05:05.265786 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvksc\" (UniqueName: \"kubernetes.io/projected/4da3d341-b501-409d-9834-02c5ccf5cada-kube-api-access-dvksc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6\" (UID: \"4da3d341-b501-409d-9834-02c5ccf5cada\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:05 crc kubenswrapper[4806]: I0126 08:05:05.266354 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:05 crc kubenswrapper[4806]: I0126 08:05:05.635693 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6"] Jan 26 08:05:05 crc kubenswrapper[4806]: I0126 08:05:05.775951 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" event={"ID":"4da3d341-b501-409d-9834-02c5ccf5cada","Type":"ContainerStarted","Data":"1fa44ba75029077faf7ea004695b43f76032a24cd23937efd5ae2bda7688e5ce"} Jan 26 08:05:06 crc kubenswrapper[4806]: I0126 08:05:06.794347 4806 generic.go:334] "Generic (PLEG): container finished" podID="4da3d341-b501-409d-9834-02c5ccf5cada" containerID="86a70d03d301469f907c059be20305d3518c0a162221281965433c9312902f06" exitCode=0 Jan 26 08:05:06 crc kubenswrapper[4806]: I0126 08:05:06.794406 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" event={"ID":"4da3d341-b501-409d-9834-02c5ccf5cada","Type":"ContainerDied","Data":"86a70d03d301469f907c059be20305d3518c0a162221281965433c9312902f06"} Jan 26 08:05:07 crc kubenswrapper[4806]: I0126 08:05:07.745921 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-qd6mh" podUID="ee89739e-edc1-41b5-bf4a-da80ba0a59aa" containerName="console" containerID="cri-o://e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54" gracePeriod=15 Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.249952 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qd6mh_ee89739e-edc1-41b5-bf4a-da80ba0a59aa/console/0.log" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.250253 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.282487 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th4lg\" (UniqueName: \"kubernetes.io/projected/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-kube-api-access-th4lg\") pod \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.282709 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-serving-cert\") pod \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.282738 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-config\") pod \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.282784 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-trusted-ca-bundle\") pod \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.282811 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-oauth-serving-cert\") pod \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.282848 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-service-ca\") pod \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.282877 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-oauth-config\") pod \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\" (UID: \"ee89739e-edc1-41b5-bf4a-da80ba0a59aa\") " Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.283462 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-config" (OuterVolumeSpecName: "console-config") pod "ee89739e-edc1-41b5-bf4a-da80ba0a59aa" (UID: "ee89739e-edc1-41b5-bf4a-da80ba0a59aa"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.283489 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ee89739e-edc1-41b5-bf4a-da80ba0a59aa" (UID: "ee89739e-edc1-41b5-bf4a-da80ba0a59aa"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.283739 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ee89739e-edc1-41b5-bf4a-da80ba0a59aa" (UID: "ee89739e-edc1-41b5-bf4a-da80ba0a59aa"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.283793 4806 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.283812 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.283910 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-service-ca" (OuterVolumeSpecName: "service-ca") pod "ee89739e-edc1-41b5-bf4a-da80ba0a59aa" (UID: "ee89739e-edc1-41b5-bf4a-da80ba0a59aa"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.290782 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ee89739e-edc1-41b5-bf4a-da80ba0a59aa" (UID: "ee89739e-edc1-41b5-bf4a-da80ba0a59aa"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.290840 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-kube-api-access-th4lg" (OuterVolumeSpecName: "kube-api-access-th4lg") pod "ee89739e-edc1-41b5-bf4a-da80ba0a59aa" (UID: "ee89739e-edc1-41b5-bf4a-da80ba0a59aa"). InnerVolumeSpecName "kube-api-access-th4lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.299080 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ee89739e-edc1-41b5-bf4a-da80ba0a59aa" (UID: "ee89739e-edc1-41b5-bf4a-da80ba0a59aa"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.384656 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th4lg\" (UniqueName: \"kubernetes.io/projected/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-kube-api-access-th4lg\") on node \"crc\" DevicePath \"\"" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.384715 4806 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.384736 4806 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.384757 4806 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.384776 4806 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ee89739e-edc1-41b5-bf4a-da80ba0a59aa-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.812193 4806 generic.go:334] "Generic (PLEG): container finished" podID="4da3d341-b501-409d-9834-02c5ccf5cada" containerID="191a444e8e6177febc7a9efcf8c9c0ec39824bb5ea97c4b7cee40aa42581dcce" exitCode=0 Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.812278 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" event={"ID":"4da3d341-b501-409d-9834-02c5ccf5cada","Type":"ContainerDied","Data":"191a444e8e6177febc7a9efcf8c9c0ec39824bb5ea97c4b7cee40aa42581dcce"} Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.815603 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-qd6mh_ee89739e-edc1-41b5-bf4a-da80ba0a59aa/console/0.log" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.815827 4806 generic.go:334] "Generic (PLEG): container finished" podID="ee89739e-edc1-41b5-bf4a-da80ba0a59aa" containerID="e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54" exitCode=2 Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.815883 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qd6mh" event={"ID":"ee89739e-edc1-41b5-bf4a-da80ba0a59aa","Type":"ContainerDied","Data":"e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54"} Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.815923 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-qd6mh" event={"ID":"ee89739e-edc1-41b5-bf4a-da80ba0a59aa","Type":"ContainerDied","Data":"2ff1572bd7b25c2ebc3727b20392a89fd2ed2550d4dfe4c8dcb4b223ba6548ad"} Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.815952 4806 scope.go:117] "RemoveContainer" containerID="e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.816297 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-qd6mh" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.849594 4806 scope.go:117] "RemoveContainer" containerID="e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54" Jan 26 08:05:08 crc kubenswrapper[4806]: E0126 08:05:08.850112 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54\": container with ID starting with e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54 not found: ID does not exist" containerID="e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.850166 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54"} err="failed to get container status \"e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54\": rpc error: code = NotFound desc = could not find container \"e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54\": container with ID starting with e6cfd3ebbdf70134fd3614b63fbe89c4b8757d61a42572b4578ad19dc0793b54 not found: ID does not exist" Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.860180 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-qd6mh"] Jan 26 08:05:08 crc kubenswrapper[4806]: I0126 08:05:08.869228 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-qd6mh"] Jan 26 08:05:09 crc kubenswrapper[4806]: I0126 08:05:09.050654 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee89739e-edc1-41b5-bf4a-da80ba0a59aa" path="/var/lib/kubelet/pods/ee89739e-edc1-41b5-bf4a-da80ba0a59aa/volumes" Jan 26 08:05:09 crc kubenswrapper[4806]: I0126 08:05:09.823578 4806 generic.go:334] "Generic (PLEG): container finished" podID="4da3d341-b501-409d-9834-02c5ccf5cada" containerID="e91e65dd7fa943fb0023029d8712108ad314883cac3af8bcdd870a06a93b9882" exitCode=0 Jan 26 08:05:09 crc kubenswrapper[4806]: I0126 08:05:09.823673 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" event={"ID":"4da3d341-b501-409d-9834-02c5ccf5cada","Type":"ContainerDied","Data":"e91e65dd7fa943fb0023029d8712108ad314883cac3af8bcdd870a06a93b9882"} Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.129958 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.321389 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4da3d341-b501-409d-9834-02c5ccf5cada-util\") pod \"4da3d341-b501-409d-9834-02c5ccf5cada\" (UID: \"4da3d341-b501-409d-9834-02c5ccf5cada\") " Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.321492 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4da3d341-b501-409d-9834-02c5ccf5cada-bundle\") pod \"4da3d341-b501-409d-9834-02c5ccf5cada\" (UID: \"4da3d341-b501-409d-9834-02c5ccf5cada\") " Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.321628 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvksc\" (UniqueName: \"kubernetes.io/projected/4da3d341-b501-409d-9834-02c5ccf5cada-kube-api-access-dvksc\") pod \"4da3d341-b501-409d-9834-02c5ccf5cada\" (UID: \"4da3d341-b501-409d-9834-02c5ccf5cada\") " Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.322436 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4da3d341-b501-409d-9834-02c5ccf5cada-bundle" (OuterVolumeSpecName: "bundle") pod "4da3d341-b501-409d-9834-02c5ccf5cada" (UID: "4da3d341-b501-409d-9834-02c5ccf5cada"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.329691 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da3d341-b501-409d-9834-02c5ccf5cada-kube-api-access-dvksc" (OuterVolumeSpecName: "kube-api-access-dvksc") pod "4da3d341-b501-409d-9834-02c5ccf5cada" (UID: "4da3d341-b501-409d-9834-02c5ccf5cada"). InnerVolumeSpecName "kube-api-access-dvksc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.332109 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4da3d341-b501-409d-9834-02c5ccf5cada-util" (OuterVolumeSpecName: "util") pod "4da3d341-b501-409d-9834-02c5ccf5cada" (UID: "4da3d341-b501-409d-9834-02c5ccf5cada"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.423082 4806 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4da3d341-b501-409d-9834-02c5ccf5cada-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.423132 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvksc\" (UniqueName: \"kubernetes.io/projected/4da3d341-b501-409d-9834-02c5ccf5cada-kube-api-access-dvksc\") on node \"crc\" DevicePath \"\"" Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.423153 4806 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4da3d341-b501-409d-9834-02c5ccf5cada-util\") on node \"crc\" DevicePath \"\"" Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.837952 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" event={"ID":"4da3d341-b501-409d-9834-02c5ccf5cada","Type":"ContainerDied","Data":"1fa44ba75029077faf7ea004695b43f76032a24cd23937efd5ae2bda7688e5ce"} Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.837997 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fa44ba75029077faf7ea004695b43f76032a24cd23937efd5ae2bda7688e5ce" Jan 26 08:05:11 crc kubenswrapper[4806]: I0126 08:05:11.838371 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.084276 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d"] Jan 26 08:05:20 crc kubenswrapper[4806]: E0126 08:05:20.086067 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da3d341-b501-409d-9834-02c5ccf5cada" containerName="util" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.086140 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da3d341-b501-409d-9834-02c5ccf5cada" containerName="util" Jan 26 08:05:20 crc kubenswrapper[4806]: E0126 08:05:20.086194 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee89739e-edc1-41b5-bf4a-da80ba0a59aa" containerName="console" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.086253 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee89739e-edc1-41b5-bf4a-da80ba0a59aa" containerName="console" Jan 26 08:05:20 crc kubenswrapper[4806]: E0126 08:05:20.086315 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da3d341-b501-409d-9834-02c5ccf5cada" containerName="pull" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.086369 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da3d341-b501-409d-9834-02c5ccf5cada" containerName="pull" Jan 26 08:05:20 crc kubenswrapper[4806]: E0126 08:05:20.086426 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da3d341-b501-409d-9834-02c5ccf5cada" containerName="extract" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.086478 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da3d341-b501-409d-9834-02c5ccf5cada" containerName="extract" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.086670 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da3d341-b501-409d-9834-02c5ccf5cada" containerName="extract" Jan 
26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.086733 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee89739e-edc1-41b5-bf4a-da80ba0a59aa" containerName="console" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.087711 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.125603 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.126008 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.126099 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.126145 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-ww765" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.126811 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.127470 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d"] Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.239966 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv9h6\" (UniqueName: \"kubernetes.io/projected/73ae6aae-d47c-4eb0-a300-4cf672c00caa-kube-api-access-kv9h6\") pod \"metallb-operator-controller-manager-7484b44c99-sst7d\" (UID: \"73ae6aae-d47c-4eb0-a300-4cf672c00caa\") " pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.240022 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73ae6aae-d47c-4eb0-a300-4cf672c00caa-apiservice-cert\") pod \"metallb-operator-controller-manager-7484b44c99-sst7d\" (UID: \"73ae6aae-d47c-4eb0-a300-4cf672c00caa\") " pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.240059 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73ae6aae-d47c-4eb0-a300-4cf672c00caa-webhook-cert\") pod \"metallb-operator-controller-manager-7484b44c99-sst7d\" (UID: \"73ae6aae-d47c-4eb0-a300-4cf672c00caa\") " pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.341612 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73ae6aae-d47c-4eb0-a300-4cf672c00caa-apiservice-cert\") pod \"metallb-operator-controller-manager-7484b44c99-sst7d\" (UID: \"73ae6aae-d47c-4eb0-a300-4cf672c00caa\") " pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.341685 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/73ae6aae-d47c-4eb0-a300-4cf672c00caa-webhook-cert\") pod \"metallb-operator-controller-manager-7484b44c99-sst7d\" (UID: \"73ae6aae-d47c-4eb0-a300-4cf672c00caa\") " pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.341756 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv9h6\" (UniqueName: \"kubernetes.io/projected/73ae6aae-d47c-4eb0-a300-4cf672c00caa-kube-api-access-kv9h6\") pod \"metallb-operator-controller-manager-7484b44c99-sst7d\" (UID: \"73ae6aae-d47c-4eb0-a300-4cf672c00caa\") " pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.348680 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/73ae6aae-d47c-4eb0-a300-4cf672c00caa-apiservice-cert\") pod \"metallb-operator-controller-manager-7484b44c99-sst7d\" (UID: \"73ae6aae-d47c-4eb0-a300-4cf672c00caa\") " pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.353181 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/73ae6aae-d47c-4eb0-a300-4cf672c00caa-webhook-cert\") pod \"metallb-operator-controller-manager-7484b44c99-sst7d\" (UID: \"73ae6aae-d47c-4eb0-a300-4cf672c00caa\") " pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.369593 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv9h6\" (UniqueName: \"kubernetes.io/projected/73ae6aae-d47c-4eb0-a300-4cf672c00caa-kube-api-access-kv9h6\") pod \"metallb-operator-controller-manager-7484b44c99-sst7d\" (UID: \"73ae6aae-d47c-4eb0-a300-4cf672c00caa\") " pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.415710 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.532128 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh"] Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.532874 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.534421 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-9zztn" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.536693 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.536716 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.559010 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh"] Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.644993 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e5fa748-c882-43c5-9ecd-6d3d97c944ec-apiservice-cert\") pod \"metallb-operator-webhook-server-7f57678986-nbfxh\" (UID: \"2e5fa748-c882-43c5-9ecd-6d3d97c944ec\") " pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.645399 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjr8z\" (UniqueName: \"kubernetes.io/projected/2e5fa748-c882-43c5-9ecd-6d3d97c944ec-kube-api-access-jjr8z\") pod \"metallb-operator-webhook-server-7f57678986-nbfxh\" (UID: \"2e5fa748-c882-43c5-9ecd-6d3d97c944ec\") " pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.645431 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e5fa748-c882-43c5-9ecd-6d3d97c944ec-webhook-cert\") pod \"metallb-operator-webhook-server-7f57678986-nbfxh\" (UID: \"2e5fa748-c882-43c5-9ecd-6d3d97c944ec\") " pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.747270 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e5fa748-c882-43c5-9ecd-6d3d97c944ec-apiservice-cert\") pod \"metallb-operator-webhook-server-7f57678986-nbfxh\" (UID: \"2e5fa748-c882-43c5-9ecd-6d3d97c944ec\") " pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.747330 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjr8z\" (UniqueName: \"kubernetes.io/projected/2e5fa748-c882-43c5-9ecd-6d3d97c944ec-kube-api-access-jjr8z\") pod \"metallb-operator-webhook-server-7f57678986-nbfxh\" (UID: \"2e5fa748-c882-43c5-9ecd-6d3d97c944ec\") " pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.747353 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e5fa748-c882-43c5-9ecd-6d3d97c944ec-webhook-cert\") pod \"metallb-operator-webhook-server-7f57678986-nbfxh\" (UID: \"2e5fa748-c882-43c5-9ecd-6d3d97c944ec\") " pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 
08:05:20.757487 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2e5fa748-c882-43c5-9ecd-6d3d97c944ec-apiservice-cert\") pod \"metallb-operator-webhook-server-7f57678986-nbfxh\" (UID: \"2e5fa748-c882-43c5-9ecd-6d3d97c944ec\") " pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.757549 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2e5fa748-c882-43c5-9ecd-6d3d97c944ec-webhook-cert\") pod \"metallb-operator-webhook-server-7f57678986-nbfxh\" (UID: \"2e5fa748-c882-43c5-9ecd-6d3d97c944ec\") " pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.791933 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjr8z\" (UniqueName: \"kubernetes.io/projected/2e5fa748-c882-43c5-9ecd-6d3d97c944ec-kube-api-access-jjr8z\") pod \"metallb-operator-webhook-server-7f57678986-nbfxh\" (UID: \"2e5fa748-c882-43c5-9ecd-6d3d97c944ec\") " pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.846742 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.851037 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d"] Jan 26 08:05:20 crc kubenswrapper[4806]: W0126 08:05:20.865465 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73ae6aae_d47c_4eb0_a300_4cf672c00caa.slice/crio-4b3f03d9c4180e3b0479d40194b7285b2dbef7567efebbd42f338b19ef2f06af WatchSource:0}: Error finding container 4b3f03d9c4180e3b0479d40194b7285b2dbef7567efebbd42f338b19ef2f06af: Status 404 returned error can't find the container with id 4b3f03d9c4180e3b0479d40194b7285b2dbef7567efebbd42f338b19ef2f06af Jan 26 08:05:20 crc kubenswrapper[4806]: I0126 08:05:20.895539 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" event={"ID":"73ae6aae-d47c-4eb0-a300-4cf672c00caa","Type":"ContainerStarted","Data":"4b3f03d9c4180e3b0479d40194b7285b2dbef7567efebbd42f338b19ef2f06af"} Jan 26 08:05:21 crc kubenswrapper[4806]: I0126 08:05:21.102513 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh"] Jan 26 08:05:21 crc kubenswrapper[4806]: W0126 08:05:21.106087 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e5fa748_c882_43c5_9ecd_6d3d97c944ec.slice/crio-238ee6e2ac597f0409e1a6372308d626fc05c30066a09fdb073e11382f8093b1 WatchSource:0}: Error finding container 238ee6e2ac597f0409e1a6372308d626fc05c30066a09fdb073e11382f8093b1: Status 404 returned error can't find the container with id 238ee6e2ac597f0409e1a6372308d626fc05c30066a09fdb073e11382f8093b1 Jan 26 08:05:21 crc kubenswrapper[4806]: I0126 08:05:21.901193 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" 
event={"ID":"2e5fa748-c882-43c5-9ecd-6d3d97c944ec","Type":"ContainerStarted","Data":"238ee6e2ac597f0409e1a6372308d626fc05c30066a09fdb073e11382f8093b1"} Jan 26 08:05:23 crc kubenswrapper[4806]: I0126 08:05:23.920732 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" event={"ID":"73ae6aae-d47c-4eb0-a300-4cf672c00caa","Type":"ContainerStarted","Data":"68533752dac6044f3d4637f9850f0296b33d0056bbb72c8f96c9da628d10a652"} Jan 26 08:05:23 crc kubenswrapper[4806]: I0126 08:05:23.923937 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:05:23 crc kubenswrapper[4806]: I0126 08:05:23.949250 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" podStartSLOduration=1.17339797 podStartE2EDuration="3.949234208s" podCreationTimestamp="2026-01-26 08:05:20 +0000 UTC" firstStartedPulling="2026-01-26 08:05:20.867511048 +0000 UTC m=+700.131919104" lastFinishedPulling="2026-01-26 08:05:23.643347286 +0000 UTC m=+702.907755342" observedRunningTime="2026-01-26 08:05:23.944886796 +0000 UTC m=+703.209294852" watchObservedRunningTime="2026-01-26 08:05:23.949234208 +0000 UTC m=+703.213642264" Jan 26 08:05:27 crc kubenswrapper[4806]: I0126 08:05:27.949348 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" event={"ID":"2e5fa748-c882-43c5-9ecd-6d3d97c944ec","Type":"ContainerStarted","Data":"b3497154b192b9ea1e69dddfe014936df0e7bbec4bc24d2605a8d42cc419eeea"} Jan 26 08:05:27 crc kubenswrapper[4806]: I0126 08:05:27.949853 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:05:27 crc kubenswrapper[4806]: I0126 08:05:27.967984 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" podStartSLOduration=1.904563314 podStartE2EDuration="7.967962578s" podCreationTimestamp="2026-01-26 08:05:20 +0000 UTC" firstStartedPulling="2026-01-26 08:05:21.110973732 +0000 UTC m=+700.375381788" lastFinishedPulling="2026-01-26 08:05:27.174373006 +0000 UTC m=+706.438781052" observedRunningTime="2026-01-26 08:05:27.966117066 +0000 UTC m=+707.230525122" watchObservedRunningTime="2026-01-26 08:05:27.967962578 +0000 UTC m=+707.232370644" Jan 26 08:05:40 crc kubenswrapper[4806]: I0126 08:05:40.851427 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7f57678986-nbfxh" Jan 26 08:06:00 crc kubenswrapper[4806]: I0126 08:06:00.420759 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7484b44c99-sst7d" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.223710 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jws2v"] Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.226185 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.231099 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.231592 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.231693 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-c5lwd" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.238654 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn"] Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.239363 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.267240 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.272354 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn"] Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.354450 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f35f3a4f-d62b-4a20-85c0-09e66c185e14-reloader\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.354539 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe95a617-3830-4e48-99fe-fc542f07b380-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-s45xn\" (UID: \"fe95a617-3830-4e48-99fe-fc542f07b380\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.354563 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f35f3a4f-d62b-4a20-85c0-09e66c185e14-frr-startup\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.354580 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f35f3a4f-d62b-4a20-85c0-09e66c185e14-metrics-certs\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.354623 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f35f3a4f-d62b-4a20-85c0-09e66c185e14-frr-sockets\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.355112 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f35f3a4f-d62b-4a20-85c0-09e66c185e14-frr-conf\") pod \"frr-k8s-jws2v\" (UID: 
\"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.355230 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrq8w\" (UniqueName: \"kubernetes.io/projected/f35f3a4f-d62b-4a20-85c0-09e66c185e14-kube-api-access-nrq8w\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.355302 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f35f3a4f-d62b-4a20-85c0-09e66c185e14-metrics\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.355375 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4pvk\" (UniqueName: \"kubernetes.io/projected/fe95a617-3830-4e48-99fe-fc542f07b380-kube-api-access-w4pvk\") pod \"frr-k8s-webhook-server-7df86c4f6c-s45xn\" (UID: \"fe95a617-3830-4e48-99fe-fc542f07b380\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.373039 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-tg7xx"] Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.374134 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-tg7xx" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.376808 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rd2db" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.376841 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.376968 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.379809 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.416273 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-8527s"] Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.417122 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.423578 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.445655 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8527s"] Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.469305 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe95a617-3830-4e48-99fe-fc542f07b380-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-s45xn\" (UID: \"fe95a617-3830-4e48-99fe-fc542f07b380\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.471009 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f35f3a4f-d62b-4a20-85c0-09e66c185e14-frr-startup\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.471451 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f35f3a4f-d62b-4a20-85c0-09e66c185e14-frr-startup\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.471609 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f35f3a4f-d62b-4a20-85c0-09e66c185e14-metrics-certs\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.471709 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f35f3a4f-d62b-4a20-85c0-09e66c185e14-frr-sockets\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.471789 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f35f3a4f-d62b-4a20-85c0-09e66c185e14-frr-conf\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.471905 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrq8w\" (UniqueName: \"kubernetes.io/projected/f35f3a4f-d62b-4a20-85c0-09e66c185e14-kube-api-access-nrq8w\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.471999 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f35f3a4f-d62b-4a20-85c0-09e66c185e14-metrics\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.472098 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4pvk\" (UniqueName: 
\"kubernetes.io/projected/fe95a617-3830-4e48-99fe-fc542f07b380-kube-api-access-w4pvk\") pod \"frr-k8s-webhook-server-7df86c4f6c-s45xn\" (UID: \"fe95a617-3830-4e48-99fe-fc542f07b380\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.472191 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f35f3a4f-d62b-4a20-85c0-09e66c185e14-reloader\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.472587 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f35f3a4f-d62b-4a20-85c0-09e66c185e14-reloader\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.472841 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f35f3a4f-d62b-4a20-85c0-09e66c185e14-frr-conf\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.473234 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f35f3a4f-d62b-4a20-85c0-09e66c185e14-metrics\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.473428 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f35f3a4f-d62b-4a20-85c0-09e66c185e14-frr-sockets\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.477364 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe95a617-3830-4e48-99fe-fc542f07b380-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-s45xn\" (UID: \"fe95a617-3830-4e48-99fe-fc542f07b380\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.490975 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f35f3a4f-d62b-4a20-85c0-09e66c185e14-metrics-certs\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.491810 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4pvk\" (UniqueName: \"kubernetes.io/projected/fe95a617-3830-4e48-99fe-fc542f07b380-kube-api-access-w4pvk\") pod \"frr-k8s-webhook-server-7df86c4f6c-s45xn\" (UID: \"fe95a617-3830-4e48-99fe-fc542f07b380\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.492322 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrq8w\" (UniqueName: \"kubernetes.io/projected/f35f3a4f-d62b-4a20-85c0-09e66c185e14-kube-api-access-nrq8w\") pod \"frr-k8s-jws2v\" (UID: \"f35f3a4f-d62b-4a20-85c0-09e66c185e14\") " pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 
08:06:01.543101 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.552871 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.576349 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz98w\" (UniqueName: \"kubernetes.io/projected/6eb65788-c61b-4b04-931e-d122493e153b-kube-api-access-sz98w\") pod \"controller-6968d8fdc4-8527s\" (UID: \"6eb65788-c61b-4b04-931e-d122493e153b\") " pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.576391 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-metallb-excludel2\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.576422 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-memberlist\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.576445 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-metrics-certs\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.576718 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eb65788-c61b-4b04-931e-d122493e153b-metrics-certs\") pod \"controller-6968d8fdc4-8527s\" (UID: \"6eb65788-c61b-4b04-931e-d122493e153b\") " pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.576762 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nljl\" (UniqueName: \"kubernetes.io/projected/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-kube-api-access-7nljl\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.576816 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6eb65788-c61b-4b04-931e-d122493e153b-cert\") pod \"controller-6968d8fdc4-8527s\" (UID: \"6eb65788-c61b-4b04-931e-d122493e153b\") " pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.678256 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-memberlist\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.678561 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-metrics-certs\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:01 crc kubenswrapper[4806]: E0126 08:06:01.678567 4806 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.678616 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eb65788-c61b-4b04-931e-d122493e153b-metrics-certs\") pod \"controller-6968d8fdc4-8527s\" (UID: \"6eb65788-c61b-4b04-931e-d122493e153b\") " pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.678635 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nljl\" (UniqueName: \"kubernetes.io/projected/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-kube-api-access-7nljl\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:01 crc kubenswrapper[4806]: E0126 08:06:01.678722 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-memberlist podName:e4ff47b2-636a-4bae-88ae-6fde41f5cdfc nodeName:}" failed. No retries permitted until 2026-01-26 08:06:02.178672901 +0000 UTC m=+741.443080957 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-memberlist") pod "speaker-tg7xx" (UID: "e4ff47b2-636a-4bae-88ae-6fde41f5cdfc") : secret "metallb-memberlist" not found Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.678794 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6eb65788-c61b-4b04-931e-d122493e153b-cert\") pod \"controller-6968d8fdc4-8527s\" (UID: \"6eb65788-c61b-4b04-931e-d122493e153b\") " pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.678936 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz98w\" (UniqueName: \"kubernetes.io/projected/6eb65788-c61b-4b04-931e-d122493e153b-kube-api-access-sz98w\") pod \"controller-6968d8fdc4-8527s\" (UID: \"6eb65788-c61b-4b04-931e-d122493e153b\") " pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.678965 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-metallb-excludel2\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.679856 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-metallb-excludel2\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.692701 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 
08:06:01.692752 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-metrics-certs\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.693882 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6eb65788-c61b-4b04-931e-d122493e153b-metrics-certs\") pod \"controller-6968d8fdc4-8527s\" (UID: \"6eb65788-c61b-4b04-931e-d122493e153b\") " pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.705227 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6eb65788-c61b-4b04-931e-d122493e153b-cert\") pod \"controller-6968d8fdc4-8527s\" (UID: \"6eb65788-c61b-4b04-931e-d122493e153b\") " pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.710449 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nljl\" (UniqueName: \"kubernetes.io/projected/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-kube-api-access-7nljl\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.714198 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz98w\" (UniqueName: \"kubernetes.io/projected/6eb65788-c61b-4b04-931e-d122493e153b-kube-api-access-sz98w\") pod \"controller-6968d8fdc4-8527s\" (UID: \"6eb65788-c61b-4b04-931e-d122493e153b\") " pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.732850 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.957611 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8527s"] Jan 26 08:06:01 crc kubenswrapper[4806]: W0126 08:06:01.968871 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6eb65788_c61b_4b04_931e_d122493e153b.slice/crio-bdeb5951cec339dfa4dac251124aa3849056f374895a20b7f0be132dd923a43a WatchSource:0}: Error finding container bdeb5951cec339dfa4dac251124aa3849056f374895a20b7f0be132dd923a43a: Status 404 returned error can't find the container with id bdeb5951cec339dfa4dac251124aa3849056f374895a20b7f0be132dd923a43a Jan 26 08:06:01 crc kubenswrapper[4806]: I0126 08:06:01.995791 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn"] Jan 26 08:06:02 crc kubenswrapper[4806]: W0126 08:06:02.022092 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe95a617_3830_4e48_99fe_fc542f07b380.slice/crio-83daf341b72c6e97692f20cef8fad0d774c15a6fd7e17905e81c407bfb68de07 WatchSource:0}: Error finding container 83daf341b72c6e97692f20cef8fad0d774c15a6fd7e17905e81c407bfb68de07: Status 404 returned error can't find the container with id 83daf341b72c6e97692f20cef8fad0d774c15a6fd7e17905e81c407bfb68de07 Jan 26 08:06:02 crc kubenswrapper[4806]: I0126 08:06:02.135413 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" event={"ID":"fe95a617-3830-4e48-99fe-fc542f07b380","Type":"ContainerStarted","Data":"83daf341b72c6e97692f20cef8fad0d774c15a6fd7e17905e81c407bfb68de07"} Jan 26 08:06:02 crc kubenswrapper[4806]: I0126 08:06:02.136398 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jws2v" event={"ID":"f35f3a4f-d62b-4a20-85c0-09e66c185e14","Type":"ContainerStarted","Data":"2415814929fdb52a03c3020d2fa49828e44e4a817c09fad6c9f347907ef81d88"} Jan 26 08:06:02 crc kubenswrapper[4806]: I0126 08:06:02.137421 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8527s" event={"ID":"6eb65788-c61b-4b04-931e-d122493e153b","Type":"ContainerStarted","Data":"42a1f888de318a1141b539ff4b47b4ea68e2c60760be00cad78399865d5abd63"} Jan 26 08:06:02 crc kubenswrapper[4806]: I0126 08:06:02.137448 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8527s" event={"ID":"6eb65788-c61b-4b04-931e-d122493e153b","Type":"ContainerStarted","Data":"bdeb5951cec339dfa4dac251124aa3849056f374895a20b7f0be132dd923a43a"} Jan 26 08:06:02 crc kubenswrapper[4806]: I0126 08:06:02.185980 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-memberlist\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:02 crc kubenswrapper[4806]: I0126 08:06:02.191938 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e4ff47b2-636a-4bae-88ae-6fde41f5cdfc-memberlist\") pod \"speaker-tg7xx\" (UID: \"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc\") " pod="metallb-system/speaker-tg7xx" Jan 26 08:06:02 crc kubenswrapper[4806]: I0126 08:06:02.287489 4806 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-tg7xx" Jan 26 08:06:02 crc kubenswrapper[4806]: W0126 08:06:02.307212 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4ff47b2_636a_4bae_88ae_6fde41f5cdfc.slice/crio-ecf0d73ab1cac175dad13e3680512adf1a7823d988099f7e1de3f674f78f2da5 WatchSource:0}: Error finding container ecf0d73ab1cac175dad13e3680512adf1a7823d988099f7e1de3f674f78f2da5: Status 404 returned error can't find the container with id ecf0d73ab1cac175dad13e3680512adf1a7823d988099f7e1de3f674f78f2da5 Jan 26 08:06:03 crc kubenswrapper[4806]: I0126 08:06:03.147649 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8527s" event={"ID":"6eb65788-c61b-4b04-931e-d122493e153b","Type":"ContainerStarted","Data":"fccb7197f0cab35b14597f2e0dfa6f41abc8fa3db4631a8f374ca5d5e09504ae"} Jan 26 08:06:03 crc kubenswrapper[4806]: I0126 08:06:03.148553 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:03 crc kubenswrapper[4806]: I0126 08:06:03.153554 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tg7xx" event={"ID":"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc","Type":"ContainerStarted","Data":"cfeabae273e5de5ef89bc4c909a815467e9a41fef6899cfcacfd637afa693300"} Jan 26 08:06:03 crc kubenswrapper[4806]: I0126 08:06:03.153601 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tg7xx" event={"ID":"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc","Type":"ContainerStarted","Data":"500c3082fde5ceb4be12eeb45ee2e79763fe5d168fd3ea12a51674b42fa0dabe"} Jan 26 08:06:03 crc kubenswrapper[4806]: I0126 08:06:03.153612 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-tg7xx" event={"ID":"e4ff47b2-636a-4bae-88ae-6fde41f5cdfc","Type":"ContainerStarted","Data":"ecf0d73ab1cac175dad13e3680512adf1a7823d988099f7e1de3f674f78f2da5"} Jan 26 08:06:03 crc kubenswrapper[4806]: I0126 08:06:03.153752 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-tg7xx" Jan 26 08:06:03 crc kubenswrapper[4806]: I0126 08:06:03.196507 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-8527s" podStartSLOduration=2.196485463 podStartE2EDuration="2.196485463s" podCreationTimestamp="2026-01-26 08:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:06:03.172303932 +0000 UTC m=+742.436711988" watchObservedRunningTime="2026-01-26 08:06:03.196485463 +0000 UTC m=+742.460893529" Jan 26 08:06:03 crc kubenswrapper[4806]: I0126 08:06:03.197059 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-tg7xx" podStartSLOduration=2.197052109 podStartE2EDuration="2.197052109s" podCreationTimestamp="2026-01-26 08:06:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:06:03.192630904 +0000 UTC m=+742.457038970" watchObservedRunningTime="2026-01-26 08:06:03.197052109 +0000 UTC m=+742.461460165" Jan 26 08:06:10 crc kubenswrapper[4806]: I0126 08:06:10.211936 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" 
event={"ID":"fe95a617-3830-4e48-99fe-fc542f07b380","Type":"ContainerStarted","Data":"38dc6cc6de82f873f4741613a99df16a097415c4051ffd3a7b1a0a1dd5b669b6"} Jan 26 08:06:10 crc kubenswrapper[4806]: I0126 08:06:10.212574 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" Jan 26 08:06:10 crc kubenswrapper[4806]: I0126 08:06:10.214395 4806 generic.go:334] "Generic (PLEG): container finished" podID="f35f3a4f-d62b-4a20-85c0-09e66c185e14" containerID="fac9f145d0fde40ba9d4d485da257ef60cdc469c468073779a2af1e5b0a414ea" exitCode=0 Jan 26 08:06:10 crc kubenswrapper[4806]: I0126 08:06:10.214468 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jws2v" event={"ID":"f35f3a4f-d62b-4a20-85c0-09e66c185e14","Type":"ContainerDied","Data":"fac9f145d0fde40ba9d4d485da257ef60cdc469c468073779a2af1e5b0a414ea"} Jan 26 08:06:10 crc kubenswrapper[4806]: I0126 08:06:10.239270 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" podStartSLOduration=1.564417988 podStartE2EDuration="9.239244938s" podCreationTimestamp="2026-01-26 08:06:01 +0000 UTC" firstStartedPulling="2026-01-26 08:06:02.024100386 +0000 UTC m=+741.288508442" lastFinishedPulling="2026-01-26 08:06:09.698927336 +0000 UTC m=+748.963335392" observedRunningTime="2026-01-26 08:06:10.237648063 +0000 UTC m=+749.502056119" watchObservedRunningTime="2026-01-26 08:06:10.239244938 +0000 UTC m=+749.503653004" Jan 26 08:06:11 crc kubenswrapper[4806]: I0126 08:06:11.221788 4806 generic.go:334] "Generic (PLEG): container finished" podID="f35f3a4f-d62b-4a20-85c0-09e66c185e14" containerID="f8c122e6560fe1d3972485e87b0af82bd4a121a3d8911e5c660f0548935574ff" exitCode=0 Jan 26 08:06:11 crc kubenswrapper[4806]: I0126 08:06:11.221887 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jws2v" event={"ID":"f35f3a4f-d62b-4a20-85c0-09e66c185e14","Type":"ContainerDied","Data":"f8c122e6560fe1d3972485e87b0af82bd4a121a3d8911e5c660f0548935574ff"} Jan 26 08:06:11 crc kubenswrapper[4806]: I0126 08:06:11.703433 4806 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 08:06:12 crc kubenswrapper[4806]: I0126 08:06:12.233212 4806 generic.go:334] "Generic (PLEG): container finished" podID="f35f3a4f-d62b-4a20-85c0-09e66c185e14" containerID="e68d3213251cfb65c40bd9a827a133c9770dfbe3143655097775cb5db8257f81" exitCode=0 Jan 26 08:06:12 crc kubenswrapper[4806]: I0126 08:06:12.233324 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jws2v" event={"ID":"f35f3a4f-d62b-4a20-85c0-09e66c185e14","Type":"ContainerDied","Data":"e68d3213251cfb65c40bd9a827a133c9770dfbe3143655097775cb5db8257f81"} Jan 26 08:06:12 crc kubenswrapper[4806]: I0126 08:06:12.291802 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-tg7xx" Jan 26 08:06:13 crc kubenswrapper[4806]: I0126 08:06:13.241494 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jws2v" event={"ID":"f35f3a4f-d62b-4a20-85c0-09e66c185e14","Type":"ContainerStarted","Data":"d3da12edfe322df299cb43ef211ff4efb6d1d2348fa87c906fe4067dec9a189e"} Jan 26 08:06:14 crc kubenswrapper[4806]: I0126 08:06:14.252958 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jws2v" 
event={"ID":"f35f3a4f-d62b-4a20-85c0-09e66c185e14","Type":"ContainerStarted","Data":"1c6f9efa568461f2f9e35d49cd5037f658ab743c20a58aaf8d83c890c037729d"} Jan 26 08:06:14 crc kubenswrapper[4806]: I0126 08:06:14.252999 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jws2v" event={"ID":"f35f3a4f-d62b-4a20-85c0-09e66c185e14","Type":"ContainerStarted","Data":"4fde6ff892748f0e01c4d8778af03097af3dec388ece4be4b45b2c9d0e03e80d"} Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.263768 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jws2v" event={"ID":"f35f3a4f-d62b-4a20-85c0-09e66c185e14","Type":"ContainerStarted","Data":"264afadd3f0ad87154ba01eae3dacf0505e926d2f6604b8b5595734183744567"} Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.264177 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jws2v" event={"ID":"f35f3a4f-d62b-4a20-85c0-09e66c185e14","Type":"ContainerStarted","Data":"3bcdef3772f279aa4798e1a642c9e5b92b12899a8cecb0af47d0ba2b56cd1b5c"} Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.264193 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jws2v" event={"ID":"f35f3a4f-d62b-4a20-85c0-09e66c185e14","Type":"ContainerStarted","Data":"e446157492b6a78203e58da637d8a73edf9d6cd026a126c3ed3294617c9b98ef"} Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.264228 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.286236 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-vmlc5"] Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.287382 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vmlc5" Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.290512 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-kfwbz" Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.290985 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.291048 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.301776 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vmlc5"] Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.307893 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jws2v" podStartSLOduration=6.384544959 podStartE2EDuration="14.307875665s" podCreationTimestamp="2026-01-26 08:06:01 +0000 UTC" firstStartedPulling="2026-01-26 08:06:01.743124376 +0000 UTC m=+741.007532432" lastFinishedPulling="2026-01-26 08:06:09.666455082 +0000 UTC m=+748.930863138" observedRunningTime="2026-01-26 08:06:15.304090529 +0000 UTC m=+754.568498585" watchObservedRunningTime="2026-01-26 08:06:15.307875665 +0000 UTC m=+754.572283721" Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.364694 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6kf7\" (UniqueName: \"kubernetes.io/projected/dec4c2d3-ec28-467a-9432-02a3441455be-kube-api-access-t6kf7\") pod \"openstack-operator-index-vmlc5\" (UID: \"dec4c2d3-ec28-467a-9432-02a3441455be\") " pod="openstack-operators/openstack-operator-index-vmlc5" Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.466345 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6kf7\" (UniqueName: \"kubernetes.io/projected/dec4c2d3-ec28-467a-9432-02a3441455be-kube-api-access-t6kf7\") pod \"openstack-operator-index-vmlc5\" (UID: \"dec4c2d3-ec28-467a-9432-02a3441455be\") " pod="openstack-operators/openstack-operator-index-vmlc5" Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.493279 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6kf7\" (UniqueName: \"kubernetes.io/projected/dec4c2d3-ec28-467a-9432-02a3441455be-kube-api-access-t6kf7\") pod \"openstack-operator-index-vmlc5\" (UID: \"dec4c2d3-ec28-467a-9432-02a3441455be\") " pod="openstack-operators/openstack-operator-index-vmlc5" Jan 26 08:06:15 crc kubenswrapper[4806]: I0126 08:06:15.650801 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vmlc5" Jan 26 08:06:16 crc kubenswrapper[4806]: I0126 08:06:16.106075 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-vmlc5"] Jan 26 08:06:16 crc kubenswrapper[4806]: I0126 08:06:16.278619 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vmlc5" event={"ID":"dec4c2d3-ec28-467a-9432-02a3441455be","Type":"ContainerStarted","Data":"efb2bfd60460bbf177f5090bab1dd5f2dca0973f4514320b12358d6553095440"} Jan 26 08:06:16 crc kubenswrapper[4806]: I0126 08:06:16.543649 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:16 crc kubenswrapper[4806]: I0126 08:06:16.581638 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:18 crc kubenswrapper[4806]: I0126 08:06:18.662186 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vmlc5"] Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.262891 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8lljx"] Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.263686 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8lljx" Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.280839 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8lljx"] Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.297214 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vmlc5" event={"ID":"dec4c2d3-ec28-467a-9432-02a3441455be","Type":"ContainerStarted","Data":"e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037"} Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.297336 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-vmlc5" podUID="dec4c2d3-ec28-467a-9432-02a3441455be" containerName="registry-server" containerID="cri-o://e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037" gracePeriod=2 Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.314116 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-vmlc5" podStartSLOduration=1.8512742960000002 podStartE2EDuration="4.314096223s" podCreationTimestamp="2026-01-26 08:06:15 +0000 UTC" firstStartedPulling="2026-01-26 08:06:16.113847956 +0000 UTC m=+755.378256022" lastFinishedPulling="2026-01-26 08:06:18.576669893 +0000 UTC m=+757.841077949" observedRunningTime="2026-01-26 08:06:19.312205219 +0000 UTC m=+758.576613265" watchObservedRunningTime="2026-01-26 08:06:19.314096223 +0000 UTC m=+758.578504279" Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.420474 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx88x\" (UniqueName: \"kubernetes.io/projected/72beed4f-5ada-46d1-874a-3394b8768fd2-kube-api-access-sx88x\") pod \"openstack-operator-index-8lljx\" (UID: \"72beed4f-5ada-46d1-874a-3394b8768fd2\") " pod="openstack-operators/openstack-operator-index-8lljx" Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.522290 4806 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-sx88x\" (UniqueName: \"kubernetes.io/projected/72beed4f-5ada-46d1-874a-3394b8768fd2-kube-api-access-sx88x\") pod \"openstack-operator-index-8lljx\" (UID: \"72beed4f-5ada-46d1-874a-3394b8768fd2\") " pod="openstack-operators/openstack-operator-index-8lljx" Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.543431 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx88x\" (UniqueName: \"kubernetes.io/projected/72beed4f-5ada-46d1-874a-3394b8768fd2-kube-api-access-sx88x\") pod \"openstack-operator-index-8lljx\" (UID: \"72beed4f-5ada-46d1-874a-3394b8768fd2\") " pod="openstack-operators/openstack-operator-index-8lljx" Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.579998 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8lljx" Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.740731 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-vmlc5" Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.932232 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6kf7\" (UniqueName: \"kubernetes.io/projected/dec4c2d3-ec28-467a-9432-02a3441455be-kube-api-access-t6kf7\") pod \"dec4c2d3-ec28-467a-9432-02a3441455be\" (UID: \"dec4c2d3-ec28-467a-9432-02a3441455be\") " Jan 26 08:06:19 crc kubenswrapper[4806]: I0126 08:06:19.937714 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec4c2d3-ec28-467a-9432-02a3441455be-kube-api-access-t6kf7" (OuterVolumeSpecName: "kube-api-access-t6kf7") pod "dec4c2d3-ec28-467a-9432-02a3441455be" (UID: "dec4c2d3-ec28-467a-9432-02a3441455be"). InnerVolumeSpecName "kube-api-access-t6kf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.007958 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8lljx"] Jan 26 08:06:20 crc kubenswrapper[4806]: W0126 08:06:20.016033 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72beed4f_5ada_46d1_874a_3394b8768fd2.slice/crio-c8f0b7eb8d37debef57321b3f83aa5f16c631cd9320f175aa4c87462c043159e WatchSource:0}: Error finding container c8f0b7eb8d37debef57321b3f83aa5f16c631cd9320f175aa4c87462c043159e: Status 404 returned error can't find the container with id c8f0b7eb8d37debef57321b3f83aa5f16c631cd9320f175aa4c87462c043159e Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.033472 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6kf7\" (UniqueName: \"kubernetes.io/projected/dec4c2d3-ec28-467a-9432-02a3441455be-kube-api-access-t6kf7\") on node \"crc\" DevicePath \"\"" Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.304282 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8lljx" event={"ID":"72beed4f-5ada-46d1-874a-3394b8768fd2","Type":"ContainerStarted","Data":"d00a5423f6484686b44fdd68e9438cd4b9160bb937744834328caafc5bf62f55"} Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.304324 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8lljx" event={"ID":"72beed4f-5ada-46d1-874a-3394b8768fd2","Type":"ContainerStarted","Data":"c8f0b7eb8d37debef57321b3f83aa5f16c631cd9320f175aa4c87462c043159e"} Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.306073 4806 generic.go:334] "Generic (PLEG): container finished" podID="dec4c2d3-ec28-467a-9432-02a3441455be" containerID="e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037" exitCode=0 Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.306117 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-vmlc5" Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.306119 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vmlc5" event={"ID":"dec4c2d3-ec28-467a-9432-02a3441455be","Type":"ContainerDied","Data":"e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037"} Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.306286 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-vmlc5" event={"ID":"dec4c2d3-ec28-467a-9432-02a3441455be","Type":"ContainerDied","Data":"efb2bfd60460bbf177f5090bab1dd5f2dca0973f4514320b12358d6553095440"} Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.306321 4806 scope.go:117] "RemoveContainer" containerID="e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037" Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.323243 4806 scope.go:117] "RemoveContainer" containerID="e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037" Jan 26 08:06:20 crc kubenswrapper[4806]: E0126 08:06:20.324024 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037\": container with ID starting with e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037 not found: ID does not exist" containerID="e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037" Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.324061 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037"} err="failed to get container status \"e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037\": rpc error: code = NotFound desc = could not find container \"e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037\": container with ID starting with e954c4b5d57e2e21a0c18436a35307e3f25772d0178c831be10fa7d0da48c037 not found: ID does not exist" Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.326665 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8lljx" podStartSLOduration=1.280467049 podStartE2EDuration="1.326655479s" podCreationTimestamp="2026-01-26 08:06:19 +0000 UTC" firstStartedPulling="2026-01-26 08:06:20.020704326 +0000 UTC m=+759.285112382" lastFinishedPulling="2026-01-26 08:06:20.066892756 +0000 UTC m=+759.331300812" observedRunningTime="2026-01-26 08:06:20.32382594 +0000 UTC m=+759.588233996" watchObservedRunningTime="2026-01-26 08:06:20.326655479 +0000 UTC m=+759.591063535" Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.340815 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-vmlc5"] Jan 26 08:06:20 crc kubenswrapper[4806]: I0126 08:06:20.344508 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-vmlc5"] Jan 26 08:06:21 crc kubenswrapper[4806]: I0126 08:06:21.049984 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec4c2d3-ec28-467a-9432-02a3441455be" path="/var/lib/kubelet/pods/dec4c2d3-ec28-467a-9432-02a3441455be/volumes" Jan 26 08:06:21 crc kubenswrapper[4806]: I0126 08:06:21.557853 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-s45xn" Jan 26 08:06:21 crc kubenswrapper[4806]: I0126 08:06:21.741108 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8527s" Jan 26 08:06:29 crc kubenswrapper[4806]: I0126 08:06:29.580945 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8lljx" Jan 26 08:06:29 crc kubenswrapper[4806]: I0126 08:06:29.581433 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8lljx" Jan 26 08:06:29 crc kubenswrapper[4806]: I0126 08:06:29.610359 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8lljx" Jan 26 08:06:30 crc kubenswrapper[4806]: I0126 08:06:30.404086 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8lljx" Jan 26 08:06:31 crc kubenswrapper[4806]: I0126 08:06:31.546376 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jws2v" Jan 26 08:06:32 crc kubenswrapper[4806]: I0126 08:06:32.790426 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gz8k7"] Jan 26 08:06:32 crc kubenswrapper[4806]: E0126 08:06:32.790776 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec4c2d3-ec28-467a-9432-02a3441455be" containerName="registry-server" Jan 26 08:06:32 crc kubenswrapper[4806]: I0126 08:06:32.790793 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec4c2d3-ec28-467a-9432-02a3441455be" containerName="registry-server" Jan 26 08:06:32 crc kubenswrapper[4806]: I0126 08:06:32.790955 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec4c2d3-ec28-467a-9432-02a3441455be" containerName="registry-server" Jan 26 08:06:32 crc kubenswrapper[4806]: I0126 08:06:32.791984 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:32 crc kubenswrapper[4806]: I0126 08:06:32.801812 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gz8k7"] Jan 26 08:06:32 crc kubenswrapper[4806]: I0126 08:06:32.920247 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e08c274-72f5-4b31-b57c-007ba7cf486d-utilities\") pod \"redhat-operators-gz8k7\" (UID: \"0e08c274-72f5-4b31-b57c-007ba7cf486d\") " pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:32 crc kubenswrapper[4806]: I0126 08:06:32.920331 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e08c274-72f5-4b31-b57c-007ba7cf486d-catalog-content\") pod \"redhat-operators-gz8k7\" (UID: \"0e08c274-72f5-4b31-b57c-007ba7cf486d\") " pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:32 crc kubenswrapper[4806]: I0126 08:06:32.920359 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p69w8\" (UniqueName: \"kubernetes.io/projected/0e08c274-72f5-4b31-b57c-007ba7cf486d-kube-api-access-p69w8\") pod \"redhat-operators-gz8k7\" (UID: \"0e08c274-72f5-4b31-b57c-007ba7cf486d\") " pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:33 crc kubenswrapper[4806]: I0126 08:06:33.021739 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p69w8\" (UniqueName: \"kubernetes.io/projected/0e08c274-72f5-4b31-b57c-007ba7cf486d-kube-api-access-p69w8\") pod \"redhat-operators-gz8k7\" (UID: \"0e08c274-72f5-4b31-b57c-007ba7cf486d\") " pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:33 crc kubenswrapper[4806]: I0126 08:06:33.021839 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e08c274-72f5-4b31-b57c-007ba7cf486d-utilities\") pod \"redhat-operators-gz8k7\" (UID: \"0e08c274-72f5-4b31-b57c-007ba7cf486d\") " pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:33 crc kubenswrapper[4806]: I0126 08:06:33.021912 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e08c274-72f5-4b31-b57c-007ba7cf486d-catalog-content\") pod \"redhat-operators-gz8k7\" (UID: \"0e08c274-72f5-4b31-b57c-007ba7cf486d\") " pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:33 crc kubenswrapper[4806]: I0126 08:06:33.022712 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e08c274-72f5-4b31-b57c-007ba7cf486d-catalog-content\") pod \"redhat-operators-gz8k7\" (UID: \"0e08c274-72f5-4b31-b57c-007ba7cf486d\") " pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:33 crc kubenswrapper[4806]: I0126 08:06:33.022805 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e08c274-72f5-4b31-b57c-007ba7cf486d-utilities\") pod \"redhat-operators-gz8k7\" (UID: \"0e08c274-72f5-4b31-b57c-007ba7cf486d\") " pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:33 crc kubenswrapper[4806]: I0126 08:06:33.048180 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-p69w8\" (UniqueName: \"kubernetes.io/projected/0e08c274-72f5-4b31-b57c-007ba7cf486d-kube-api-access-p69w8\") pod \"redhat-operators-gz8k7\" (UID: \"0e08c274-72f5-4b31-b57c-007ba7cf486d\") " pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:33 crc kubenswrapper[4806]: I0126 08:06:33.115641 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:33 crc kubenswrapper[4806]: I0126 08:06:33.560060 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gz8k7"] Jan 26 08:06:34 crc kubenswrapper[4806]: I0126 08:06:34.416282 4806 generic.go:334] "Generic (PLEG): container finished" podID="0e08c274-72f5-4b31-b57c-007ba7cf486d" containerID="cc2c85e9a8b349c7501b067a79019134fce5cafa6ca76ce1a0d7aaba8843cbe4" exitCode=0 Jan 26 08:06:34 crc kubenswrapper[4806]: I0126 08:06:34.416396 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz8k7" event={"ID":"0e08c274-72f5-4b31-b57c-007ba7cf486d","Type":"ContainerDied","Data":"cc2c85e9a8b349c7501b067a79019134fce5cafa6ca76ce1a0d7aaba8843cbe4"} Jan 26 08:06:34 crc kubenswrapper[4806]: I0126 08:06:34.416981 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz8k7" event={"ID":"0e08c274-72f5-4b31-b57c-007ba7cf486d","Type":"ContainerStarted","Data":"ecaaf52e7c955729cc569dc387bf678272a658b8c39d6b41bfa12e456fbdc4c9"} Jan 26 08:06:35 crc kubenswrapper[4806]: I0126 08:06:35.427623 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz8k7" event={"ID":"0e08c274-72f5-4b31-b57c-007ba7cf486d","Type":"ContainerStarted","Data":"54159f08da6861f43c7543b34891d30e50c313d422a6446bb6c12b39865d60a7"} Jan 26 08:06:36 crc kubenswrapper[4806]: I0126 08:06:36.440694 4806 generic.go:334] "Generic (PLEG): container finished" podID="0e08c274-72f5-4b31-b57c-007ba7cf486d" containerID="54159f08da6861f43c7543b34891d30e50c313d422a6446bb6c12b39865d60a7" exitCode=0 Jan 26 08:06:36 crc kubenswrapper[4806]: I0126 08:06:36.440756 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz8k7" event={"ID":"0e08c274-72f5-4b31-b57c-007ba7cf486d","Type":"ContainerDied","Data":"54159f08da6861f43c7543b34891d30e50c313d422a6446bb6c12b39865d60a7"} Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.212990 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk"] Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.214777 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.218872 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-sq8ws" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.226675 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk"] Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.292435 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d350a047-9d1f-46ea-b0cd-54c9a629f49c-bundle\") pod \"5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk\" (UID: \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\") " pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.292555 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d350a047-9d1f-46ea-b0cd-54c9a629f49c-util\") pod \"5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk\" (UID: \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\") " pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.292587 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvsg6\" (UniqueName: \"kubernetes.io/projected/d350a047-9d1f-46ea-b0cd-54c9a629f49c-kube-api-access-zvsg6\") pod \"5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk\" (UID: \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\") " pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.393750 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d350a047-9d1f-46ea-b0cd-54c9a629f49c-util\") pod \"5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk\" (UID: \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\") " pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.393816 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvsg6\" (UniqueName: \"kubernetes.io/projected/d350a047-9d1f-46ea-b0cd-54c9a629f49c-kube-api-access-zvsg6\") pod \"5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk\" (UID: \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\") " pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.393878 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d350a047-9d1f-46ea-b0cd-54c9a629f49c-bundle\") pod \"5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk\" (UID: \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\") " pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.394262 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d350a047-9d1f-46ea-b0cd-54c9a629f49c-util\") pod \"5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk\" (UID: \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\") " pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.394318 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d350a047-9d1f-46ea-b0cd-54c9a629f49c-bundle\") pod \"5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk\" (UID: \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\") " pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.416201 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvsg6\" (UniqueName: \"kubernetes.io/projected/d350a047-9d1f-46ea-b0cd-54c9a629f49c-kube-api-access-zvsg6\") pod \"5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk\" (UID: \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\") " pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.449702 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz8k7" event={"ID":"0e08c274-72f5-4b31-b57c-007ba7cf486d","Type":"ContainerStarted","Data":"e0ec4f5aec08a934e78bd36f93cb58f9a5aab62b868c5974b3a2031570889a94"} Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.472744 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gz8k7" podStartSLOduration=3.021759699 podStartE2EDuration="5.472721101s" podCreationTimestamp="2026-01-26 08:06:32 +0000 UTC" firstStartedPulling="2026-01-26 08:06:34.418386842 +0000 UTC m=+773.682794898" lastFinishedPulling="2026-01-26 08:06:36.869348234 +0000 UTC m=+776.133756300" observedRunningTime="2026-01-26 08:06:37.467061331 +0000 UTC m=+776.731469397" watchObservedRunningTime="2026-01-26 08:06:37.472721101 +0000 UTC m=+776.737129167" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.529736 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:37 crc kubenswrapper[4806]: I0126 08:06:37.970742 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk"] Jan 26 08:06:38 crc kubenswrapper[4806]: I0126 08:06:38.456645 4806 generic.go:334] "Generic (PLEG): container finished" podID="d350a047-9d1f-46ea-b0cd-54c9a629f49c" containerID="9ab4a530c99ffba6c776ea6fda833278128568425ad0bf6c26d3262c98a17917" exitCode=0 Jan 26 08:06:38 crc kubenswrapper[4806]: I0126 08:06:38.456739 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" event={"ID":"d350a047-9d1f-46ea-b0cd-54c9a629f49c","Type":"ContainerDied","Data":"9ab4a530c99ffba6c776ea6fda833278128568425ad0bf6c26d3262c98a17917"} Jan 26 08:06:38 crc kubenswrapper[4806]: I0126 08:06:38.457683 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" event={"ID":"d350a047-9d1f-46ea-b0cd-54c9a629f49c","Type":"ContainerStarted","Data":"852dc5cf8c8fad81259a07b894ef00bb139408854c4caba7f8aecd98734484c5"} Jan 26 08:06:39 crc kubenswrapper[4806]: I0126 08:06:39.466394 4806 generic.go:334] "Generic (PLEG): container finished" podID="d350a047-9d1f-46ea-b0cd-54c9a629f49c" containerID="1aaea84081a711a869e02d4af9c4e8ae8631deccd37429f170248aa9daad2093" exitCode=0 Jan 26 08:06:39 crc kubenswrapper[4806]: I0126 08:06:39.466512 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" event={"ID":"d350a047-9d1f-46ea-b0cd-54c9a629f49c","Type":"ContainerDied","Data":"1aaea84081a711a869e02d4af9c4e8ae8631deccd37429f170248aa9daad2093"} Jan 26 08:06:40 crc kubenswrapper[4806]: I0126 08:06:40.476364 4806 generic.go:334] "Generic (PLEG): container finished" podID="d350a047-9d1f-46ea-b0cd-54c9a629f49c" containerID="8f9d10641bd7f1c268320a725c1d6f02ab959a299f0cafa030b9124fca48042a" exitCode=0 Jan 26 08:06:40 crc kubenswrapper[4806]: I0126 08:06:40.476401 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" event={"ID":"d350a047-9d1f-46ea-b0cd-54c9a629f49c","Type":"ContainerDied","Data":"8f9d10641bd7f1c268320a725c1d6f02ab959a299f0cafa030b9124fca48042a"} Jan 26 08:06:41 crc kubenswrapper[4806]: I0126 08:06:41.739072 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:41 crc kubenswrapper[4806]: I0126 08:06:41.852688 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvsg6\" (UniqueName: \"kubernetes.io/projected/d350a047-9d1f-46ea-b0cd-54c9a629f49c-kube-api-access-zvsg6\") pod \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\" (UID: \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\") " Jan 26 08:06:41 crc kubenswrapper[4806]: I0126 08:06:41.852947 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d350a047-9d1f-46ea-b0cd-54c9a629f49c-bundle\") pod \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\" (UID: \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\") " Jan 26 08:06:41 crc kubenswrapper[4806]: I0126 08:06:41.853042 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d350a047-9d1f-46ea-b0cd-54c9a629f49c-util\") pod \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\" (UID: \"d350a047-9d1f-46ea-b0cd-54c9a629f49c\") " Jan 26 08:06:41 crc kubenswrapper[4806]: I0126 08:06:41.853803 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d350a047-9d1f-46ea-b0cd-54c9a629f49c-bundle" (OuterVolumeSpecName: "bundle") pod "d350a047-9d1f-46ea-b0cd-54c9a629f49c" (UID: "d350a047-9d1f-46ea-b0cd-54c9a629f49c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:06:41 crc kubenswrapper[4806]: I0126 08:06:41.858443 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d350a047-9d1f-46ea-b0cd-54c9a629f49c-kube-api-access-zvsg6" (OuterVolumeSpecName: "kube-api-access-zvsg6") pod "d350a047-9d1f-46ea-b0cd-54c9a629f49c" (UID: "d350a047-9d1f-46ea-b0cd-54c9a629f49c"). InnerVolumeSpecName "kube-api-access-zvsg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:06:41 crc kubenswrapper[4806]: I0126 08:06:41.869257 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d350a047-9d1f-46ea-b0cd-54c9a629f49c-util" (OuterVolumeSpecName: "util") pod "d350a047-9d1f-46ea-b0cd-54c9a629f49c" (UID: "d350a047-9d1f-46ea-b0cd-54c9a629f49c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:06:41 crc kubenswrapper[4806]: I0126 08:06:41.954239 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvsg6\" (UniqueName: \"kubernetes.io/projected/d350a047-9d1f-46ea-b0cd-54c9a629f49c-kube-api-access-zvsg6\") on node \"crc\" DevicePath \"\"" Jan 26 08:06:41 crc kubenswrapper[4806]: I0126 08:06:41.954587 4806 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d350a047-9d1f-46ea-b0cd-54c9a629f49c-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:06:41 crc kubenswrapper[4806]: I0126 08:06:41.954600 4806 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d350a047-9d1f-46ea-b0cd-54c9a629f49c-util\") on node \"crc\" DevicePath \"\"" Jan 26 08:06:42 crc kubenswrapper[4806]: I0126 08:06:42.489891 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" event={"ID":"d350a047-9d1f-46ea-b0cd-54c9a629f49c","Type":"ContainerDied","Data":"852dc5cf8c8fad81259a07b894ef00bb139408854c4caba7f8aecd98734484c5"} Jan 26 08:06:42 crc kubenswrapper[4806]: I0126 08:06:42.489927 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="852dc5cf8c8fad81259a07b894ef00bb139408854c4caba7f8aecd98734484c5" Jan 26 08:06:42 crc kubenswrapper[4806]: I0126 08:06:42.490280 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk" Jan 26 08:06:43 crc kubenswrapper[4806]: I0126 08:06:43.116805 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:43 crc kubenswrapper[4806]: I0126 08:06:43.116867 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:43 crc kubenswrapper[4806]: I0126 08:06:43.185515 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:43 crc kubenswrapper[4806]: I0126 08:06:43.532807 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:45 crc kubenswrapper[4806]: I0126 08:06:45.565190 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gz8k7"] Jan 26 08:06:45 crc kubenswrapper[4806]: I0126 08:06:45.565692 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gz8k7" podUID="0e08c274-72f5-4b31-b57c-007ba7cf486d" containerName="registry-server" containerID="cri-o://e0ec4f5aec08a934e78bd36f93cb58f9a5aab62b868c5974b3a2031570889a94" gracePeriod=2 Jan 26 08:06:45 crc kubenswrapper[4806]: I0126 08:06:45.805960 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:06:45 crc kubenswrapper[4806]: I0126 08:06:45.806045 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.517933 4806 generic.go:334] "Generic (PLEG): container finished" podID="0e08c274-72f5-4b31-b57c-007ba7cf486d" containerID="e0ec4f5aec08a934e78bd36f93cb58f9a5aab62b868c5974b3a2031570889a94" exitCode=0 Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.518025 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz8k7" event={"ID":"0e08c274-72f5-4b31-b57c-007ba7cf486d","Type":"ContainerDied","Data":"e0ec4f5aec08a934e78bd36f93cb58f9a5aab62b868c5974b3a2031570889a94"} Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.576430 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj"] Jan 26 08:06:47 crc kubenswrapper[4806]: E0126 08:06:47.576676 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d350a047-9d1f-46ea-b0cd-54c9a629f49c" containerName="util" Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.576688 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d350a047-9d1f-46ea-b0cd-54c9a629f49c" containerName="util" Jan 26 08:06:47 crc kubenswrapper[4806]: E0126 08:06:47.576707 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d350a047-9d1f-46ea-b0cd-54c9a629f49c" containerName="pull" Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.576712 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d350a047-9d1f-46ea-b0cd-54c9a629f49c" containerName="pull" Jan 26 08:06:47 crc kubenswrapper[4806]: E0126 08:06:47.576724 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d350a047-9d1f-46ea-b0cd-54c9a629f49c" containerName="extract" Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.576730 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d350a047-9d1f-46ea-b0cd-54c9a629f49c" containerName="extract" Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.576841 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d350a047-9d1f-46ea-b0cd-54c9a629f49c" containerName="extract" Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.577220 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj" Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.587598 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-6bnk8" Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.612460 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj"] Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.637012 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrh7\" (UniqueName: \"kubernetes.io/projected/ec3ed82a-eb71-4099-80bd-be5ed1d06943-kube-api-access-rsrh7\") pod \"openstack-operator-controller-init-644d5c8bff-nqdhj\" (UID: \"ec3ed82a-eb71-4099-80bd-be5ed1d06943\") " pod="openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj" Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.738303 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrh7\" (UniqueName: \"kubernetes.io/projected/ec3ed82a-eb71-4099-80bd-be5ed1d06943-kube-api-access-rsrh7\") pod \"openstack-operator-controller-init-644d5c8bff-nqdhj\" (UID: \"ec3ed82a-eb71-4099-80bd-be5ed1d06943\") " pod="openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj" Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.757661 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrh7\" (UniqueName: \"kubernetes.io/projected/ec3ed82a-eb71-4099-80bd-be5ed1d06943-kube-api-access-rsrh7\") pod \"openstack-operator-controller-init-644d5c8bff-nqdhj\" (UID: \"ec3ed82a-eb71-4099-80bd-be5ed1d06943\") " pod="openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj" Jan 26 08:06:47 crc kubenswrapper[4806]: I0126 08:06:47.893125 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj" Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.298258 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.346503 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e08c274-72f5-4b31-b57c-007ba7cf486d-utilities\") pod \"0e08c274-72f5-4b31-b57c-007ba7cf486d\" (UID: \"0e08c274-72f5-4b31-b57c-007ba7cf486d\") " Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.346614 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p69w8\" (UniqueName: \"kubernetes.io/projected/0e08c274-72f5-4b31-b57c-007ba7cf486d-kube-api-access-p69w8\") pod \"0e08c274-72f5-4b31-b57c-007ba7cf486d\" (UID: \"0e08c274-72f5-4b31-b57c-007ba7cf486d\") " Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.346745 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e08c274-72f5-4b31-b57c-007ba7cf486d-catalog-content\") pod \"0e08c274-72f5-4b31-b57c-007ba7cf486d\" (UID: \"0e08c274-72f5-4b31-b57c-007ba7cf486d\") " Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.347460 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e08c274-72f5-4b31-b57c-007ba7cf486d-utilities" (OuterVolumeSpecName: "utilities") pod "0e08c274-72f5-4b31-b57c-007ba7cf486d" (UID: "0e08c274-72f5-4b31-b57c-007ba7cf486d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.349099 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e08c274-72f5-4b31-b57c-007ba7cf486d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.358739 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e08c274-72f5-4b31-b57c-007ba7cf486d-kube-api-access-p69w8" (OuterVolumeSpecName: "kube-api-access-p69w8") pod "0e08c274-72f5-4b31-b57c-007ba7cf486d" (UID: "0e08c274-72f5-4b31-b57c-007ba7cf486d"). InnerVolumeSpecName "kube-api-access-p69w8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.362244 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj"] Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.450582 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p69w8\" (UniqueName: \"kubernetes.io/projected/0e08c274-72f5-4b31-b57c-007ba7cf486d-kube-api-access-p69w8\") on node \"crc\" DevicePath \"\"" Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.466831 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e08c274-72f5-4b31-b57c-007ba7cf486d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e08c274-72f5-4b31-b57c-007ba7cf486d" (UID: "0e08c274-72f5-4b31-b57c-007ba7cf486d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.525545 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gz8k7" event={"ID":"0e08c274-72f5-4b31-b57c-007ba7cf486d","Type":"ContainerDied","Data":"ecaaf52e7c955729cc569dc387bf678272a658b8c39d6b41bfa12e456fbdc4c9"} Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.525616 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gz8k7" Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.525817 4806 scope.go:117] "RemoveContainer" containerID="e0ec4f5aec08a934e78bd36f93cb58f9a5aab62b868c5974b3a2031570889a94" Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.526648 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj" event={"ID":"ec3ed82a-eb71-4099-80bd-be5ed1d06943","Type":"ContainerStarted","Data":"bb63f36b4874b2596ff9b760fafaa1044822b6e0a76493c65b9ae76017386552"} Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.541673 4806 scope.go:117] "RemoveContainer" containerID="54159f08da6861f43c7543b34891d30e50c313d422a6446bb6c12b39865d60a7" Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.554352 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e08c274-72f5-4b31-b57c-007ba7cf486d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.564056 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gz8k7"] Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.567964 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gz8k7"] Jan 26 08:06:48 crc kubenswrapper[4806]: I0126 08:06:48.572205 4806 scope.go:117] "RemoveContainer" containerID="cc2c85e9a8b349c7501b067a79019134fce5cafa6ca76ce1a0d7aaba8843cbe4" Jan 26 08:06:49 crc kubenswrapper[4806]: I0126 08:06:49.052788 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e08c274-72f5-4b31-b57c-007ba7cf486d" path="/var/lib/kubelet/pods/0e08c274-72f5-4b31-b57c-007ba7cf486d/volumes" Jan 26 08:06:53 crc kubenswrapper[4806]: I0126 08:06:53.560569 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj" event={"ID":"ec3ed82a-eb71-4099-80bd-be5ed1d06943","Type":"ContainerStarted","Data":"61fa19065b1c567e82578f73b9970d001d84fc4adc121df6309fa378c104930a"} Jan 26 08:06:53 crc kubenswrapper[4806]: I0126 08:06:53.561701 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj" Jan 26 08:06:53 crc kubenswrapper[4806]: I0126 08:06:53.586503 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj" podStartSLOduration=2.019873174 podStartE2EDuration="6.58648819s" podCreationTimestamp="2026-01-26 08:06:47 +0000 UTC" firstStartedPulling="2026-01-26 08:06:48.385593534 +0000 UTC m=+787.650001590" lastFinishedPulling="2026-01-26 08:06:52.95220855 +0000 UTC m=+792.216616606" observedRunningTime="2026-01-26 08:06:53.583287212 +0000 UTC m=+792.847695268" watchObservedRunningTime="2026-01-26 08:06:53.58648819 +0000 UTC m=+792.850896246" Jan 26 08:07:07 crc 
kubenswrapper[4806]: I0126 08:07:07.896985 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-644d5c8bff-nqdhj" Jan 26 08:07:15 crc kubenswrapper[4806]: I0126 08:07:15.806647 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:07:15 crc kubenswrapper[4806]: I0126 08:07:15.807149 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.181994 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dr2xv"] Jan 26 08:07:21 crc kubenswrapper[4806]: E0126 08:07:21.183144 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e08c274-72f5-4b31-b57c-007ba7cf486d" containerName="extract-utilities" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.183175 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e08c274-72f5-4b31-b57c-007ba7cf486d" containerName="extract-utilities" Jan 26 08:07:21 crc kubenswrapper[4806]: E0126 08:07:21.183199 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e08c274-72f5-4b31-b57c-007ba7cf486d" containerName="extract-content" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.183214 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e08c274-72f5-4b31-b57c-007ba7cf486d" containerName="extract-content" Jan 26 08:07:21 crc kubenswrapper[4806]: E0126 08:07:21.183249 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e08c274-72f5-4b31-b57c-007ba7cf486d" containerName="registry-server" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.183266 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e08c274-72f5-4b31-b57c-007ba7cf486d" containerName="registry-server" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.183555 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e08c274-72f5-4b31-b57c-007ba7cf486d" containerName="registry-server" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.185389 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.199467 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr2xv"] Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.286114 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99d74c9d-064b-45e5-a478-9b78a19e34fa-utilities\") pod \"redhat-marketplace-dr2xv\" (UID: \"99d74c9d-064b-45e5-a478-9b78a19e34fa\") " pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.286255 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvj6q\" (UniqueName: \"kubernetes.io/projected/99d74c9d-064b-45e5-a478-9b78a19e34fa-kube-api-access-bvj6q\") pod \"redhat-marketplace-dr2xv\" (UID: \"99d74c9d-064b-45e5-a478-9b78a19e34fa\") " pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.286303 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99d74c9d-064b-45e5-a478-9b78a19e34fa-catalog-content\") pod \"redhat-marketplace-dr2xv\" (UID: \"99d74c9d-064b-45e5-a478-9b78a19e34fa\") " pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.387085 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99d74c9d-064b-45e5-a478-9b78a19e34fa-utilities\") pod \"redhat-marketplace-dr2xv\" (UID: \"99d74c9d-064b-45e5-a478-9b78a19e34fa\") " pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.387158 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvj6q\" (UniqueName: \"kubernetes.io/projected/99d74c9d-064b-45e5-a478-9b78a19e34fa-kube-api-access-bvj6q\") pod \"redhat-marketplace-dr2xv\" (UID: \"99d74c9d-064b-45e5-a478-9b78a19e34fa\") " pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.387184 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99d74c9d-064b-45e5-a478-9b78a19e34fa-catalog-content\") pod \"redhat-marketplace-dr2xv\" (UID: \"99d74c9d-064b-45e5-a478-9b78a19e34fa\") " pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.387646 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99d74c9d-064b-45e5-a478-9b78a19e34fa-catalog-content\") pod \"redhat-marketplace-dr2xv\" (UID: \"99d74c9d-064b-45e5-a478-9b78a19e34fa\") " pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.387796 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99d74c9d-064b-45e5-a478-9b78a19e34fa-utilities\") pod \"redhat-marketplace-dr2xv\" (UID: \"99d74c9d-064b-45e5-a478-9b78a19e34fa\") " pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.408633 4806 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-bvj6q\" (UniqueName: \"kubernetes.io/projected/99d74c9d-064b-45e5-a478-9b78a19e34fa-kube-api-access-bvj6q\") pod \"redhat-marketplace-dr2xv\" (UID: \"99d74c9d-064b-45e5-a478-9b78a19e34fa\") " pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.509322 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:21 crc kubenswrapper[4806]: I0126 08:07:21.961591 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr2xv"] Jan 26 08:07:21 crc kubenswrapper[4806]: W0126 08:07:21.966420 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99d74c9d_064b_45e5_a478_9b78a19e34fa.slice/crio-de7cf46cbaa9d0d9a95ce0e2267ae76fb8c08d9ecc32914c2fb3a2c2b445f46a WatchSource:0}: Error finding container de7cf46cbaa9d0d9a95ce0e2267ae76fb8c08d9ecc32914c2fb3a2c2b445f46a: Status 404 returned error can't find the container with id de7cf46cbaa9d0d9a95ce0e2267ae76fb8c08d9ecc32914c2fb3a2c2b445f46a Jan 26 08:07:22 crc kubenswrapper[4806]: I0126 08:07:22.773827 4806 generic.go:334] "Generic (PLEG): container finished" podID="99d74c9d-064b-45e5-a478-9b78a19e34fa" containerID="6af7d9b56946bf4bdd4227aba8d5a56d6170f95fa323a8318801af1a3c065c4a" exitCode=0 Jan 26 08:07:22 crc kubenswrapper[4806]: I0126 08:07:22.773944 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr2xv" event={"ID":"99d74c9d-064b-45e5-a478-9b78a19e34fa","Type":"ContainerDied","Data":"6af7d9b56946bf4bdd4227aba8d5a56d6170f95fa323a8318801af1a3c065c4a"} Jan 26 08:07:22 crc kubenswrapper[4806]: I0126 08:07:22.774067 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr2xv" event={"ID":"99d74c9d-064b-45e5-a478-9b78a19e34fa","Type":"ContainerStarted","Data":"de7cf46cbaa9d0d9a95ce0e2267ae76fb8c08d9ecc32914c2fb3a2c2b445f46a"} Jan 26 08:07:23 crc kubenswrapper[4806]: I0126 08:07:23.782822 4806 generic.go:334] "Generic (PLEG): container finished" podID="99d74c9d-064b-45e5-a478-9b78a19e34fa" containerID="64d0794379edcdb614979b577795c62f3010fef6df59cdd7dc824f1ecfd89877" exitCode=0 Jan 26 08:07:23 crc kubenswrapper[4806]: I0126 08:07:23.782890 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr2xv" event={"ID":"99d74c9d-064b-45e5-a478-9b78a19e34fa","Type":"ContainerDied","Data":"64d0794379edcdb614979b577795c62f3010fef6df59cdd7dc824f1ecfd89877"} Jan 26 08:07:24 crc kubenswrapper[4806]: I0126 08:07:24.789293 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr2xv" event={"ID":"99d74c9d-064b-45e5-a478-9b78a19e34fa","Type":"ContainerStarted","Data":"82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742"} Jan 26 08:07:31 crc kubenswrapper[4806]: I0126 08:07:31.509490 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:31 crc kubenswrapper[4806]: I0126 08:07:31.509933 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:31 crc kubenswrapper[4806]: I0126 08:07:31.573641 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dr2xv" 
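The probe entries in this stretch of the log (the failed liveness GET against http://127.0.0.1:8798/health for machine-config-daemon, and the startup/readiness transitions for redhat-marketplace-dr2xv) record the kubelet periodically hitting a container's health endpoint and treating transport errors such as "connection refused" as failures. As a rough illustration only, and not the kubelet's actual prober code, a minimal standalone sketch of that kind of HTTP health check could look like the following; probeHTTP, the 1-second timeout, and the main wrapper are hypothetical choices, while the URL is the one seen in the log.

// Hypothetical, minimal sketch (not the kubelet's prober implementation):
// issue an HTTP GET against a health endpoint and report success or failure,
// roughly the kind of check behind the "Probe failed ... connection refused"
// and "SyncLoop (probe)" lines in this log.
package main

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// probeHTTP returns nil when the endpoint answers with a 2xx/3xx status within
// the timeout; any transport error (e.g. "connect: connection refused") or a
// 4xx/5xx status is treated as a probe failure.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: unexpected status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// The URL matches the liveness endpoint seen in the log; adjust as needed.
	if err := probeHTTP("http://127.0.0.1:8798/health", 1*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("healthy")
}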
Jan 26 08:07:31 crc kubenswrapper[4806]: I0126 08:07:31.603922 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dr2xv" podStartSLOduration=9.216350955 podStartE2EDuration="10.603899529s" podCreationTimestamp="2026-01-26 08:07:21 +0000 UTC" firstStartedPulling="2026-01-26 08:07:22.777892237 +0000 UTC m=+822.042300293" lastFinishedPulling="2026-01-26 08:07:24.165440811 +0000 UTC m=+823.429848867" observedRunningTime="2026-01-26 08:07:24.831425851 +0000 UTC m=+824.095833907" watchObservedRunningTime="2026-01-26 08:07:31.603899529 +0000 UTC m=+830.868307585" Jan 26 08:07:31 crc kubenswrapper[4806]: I0126 08:07:31.870748 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:31 crc kubenswrapper[4806]: I0126 08:07:31.934423 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr2xv"] Jan 26 08:07:33 crc kubenswrapper[4806]: I0126 08:07:33.836109 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dr2xv" podUID="99d74c9d-064b-45e5-a478-9b78a19e34fa" containerName="registry-server" containerID="cri-o://82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742" gracePeriod=2 Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.795141 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.842706 4806 generic.go:334] "Generic (PLEG): container finished" podID="99d74c9d-064b-45e5-a478-9b78a19e34fa" containerID="82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742" exitCode=0 Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.842744 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr2xv" event={"ID":"99d74c9d-064b-45e5-a478-9b78a19e34fa","Type":"ContainerDied","Data":"82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742"} Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.842770 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dr2xv" event={"ID":"99d74c9d-064b-45e5-a478-9b78a19e34fa","Type":"ContainerDied","Data":"de7cf46cbaa9d0d9a95ce0e2267ae76fb8c08d9ecc32914c2fb3a2c2b445f46a"} Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.842789 4806 scope.go:117] "RemoveContainer" containerID="82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.842788 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dr2xv" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.861339 4806 scope.go:117] "RemoveContainer" containerID="64d0794379edcdb614979b577795c62f3010fef6df59cdd7dc824f1ecfd89877" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.879308 4806 scope.go:117] "RemoveContainer" containerID="6af7d9b56946bf4bdd4227aba8d5a56d6170f95fa323a8318801af1a3c065c4a" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.894300 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99d74c9d-064b-45e5-a478-9b78a19e34fa-catalog-content\") pod \"99d74c9d-064b-45e5-a478-9b78a19e34fa\" (UID: \"99d74c9d-064b-45e5-a478-9b78a19e34fa\") " Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.894630 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99d74c9d-064b-45e5-a478-9b78a19e34fa-utilities\") pod \"99d74c9d-064b-45e5-a478-9b78a19e34fa\" (UID: \"99d74c9d-064b-45e5-a478-9b78a19e34fa\") " Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.894770 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvj6q\" (UniqueName: \"kubernetes.io/projected/99d74c9d-064b-45e5-a478-9b78a19e34fa-kube-api-access-bvj6q\") pod \"99d74c9d-064b-45e5-a478-9b78a19e34fa\" (UID: \"99d74c9d-064b-45e5-a478-9b78a19e34fa\") " Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.895435 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99d74c9d-064b-45e5-a478-9b78a19e34fa-utilities" (OuterVolumeSpecName: "utilities") pod "99d74c9d-064b-45e5-a478-9b78a19e34fa" (UID: "99d74c9d-064b-45e5-a478-9b78a19e34fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.899322 4806 scope.go:117] "RemoveContainer" containerID="82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.899381 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99d74c9d-064b-45e5-a478-9b78a19e34fa-kube-api-access-bvj6q" (OuterVolumeSpecName: "kube-api-access-bvj6q") pod "99d74c9d-064b-45e5-a478-9b78a19e34fa" (UID: "99d74c9d-064b-45e5-a478-9b78a19e34fa"). InnerVolumeSpecName "kube-api-access-bvj6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:07:34 crc kubenswrapper[4806]: E0126 08:07:34.900059 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742\": container with ID starting with 82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742 not found: ID does not exist" containerID="82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.900085 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742"} err="failed to get container status \"82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742\": rpc error: code = NotFound desc = could not find container \"82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742\": container with ID starting with 82feb4b2085c085b5758fd90c9835bac59efa07c18f00472964cf87edee6f742 not found: ID does not exist" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.900105 4806 scope.go:117] "RemoveContainer" containerID="64d0794379edcdb614979b577795c62f3010fef6df59cdd7dc824f1ecfd89877" Jan 26 08:07:34 crc kubenswrapper[4806]: E0126 08:07:34.900801 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64d0794379edcdb614979b577795c62f3010fef6df59cdd7dc824f1ecfd89877\": container with ID starting with 64d0794379edcdb614979b577795c62f3010fef6df59cdd7dc824f1ecfd89877 not found: ID does not exist" containerID="64d0794379edcdb614979b577795c62f3010fef6df59cdd7dc824f1ecfd89877" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.901020 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64d0794379edcdb614979b577795c62f3010fef6df59cdd7dc824f1ecfd89877"} err="failed to get container status \"64d0794379edcdb614979b577795c62f3010fef6df59cdd7dc824f1ecfd89877\": rpc error: code = NotFound desc = could not find container \"64d0794379edcdb614979b577795c62f3010fef6df59cdd7dc824f1ecfd89877\": container with ID starting with 64d0794379edcdb614979b577795c62f3010fef6df59cdd7dc824f1ecfd89877 not found: ID does not exist" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.901164 4806 scope.go:117] "RemoveContainer" containerID="6af7d9b56946bf4bdd4227aba8d5a56d6170f95fa323a8318801af1a3c065c4a" Jan 26 08:07:34 crc kubenswrapper[4806]: E0126 08:07:34.901762 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af7d9b56946bf4bdd4227aba8d5a56d6170f95fa323a8318801af1a3c065c4a\": container with ID starting with 6af7d9b56946bf4bdd4227aba8d5a56d6170f95fa323a8318801af1a3c065c4a not found: ID does not exist" containerID="6af7d9b56946bf4bdd4227aba8d5a56d6170f95fa323a8318801af1a3c065c4a" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.901788 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af7d9b56946bf4bdd4227aba8d5a56d6170f95fa323a8318801af1a3c065c4a"} err="failed to get container status \"6af7d9b56946bf4bdd4227aba8d5a56d6170f95fa323a8318801af1a3c065c4a\": rpc error: code = NotFound desc = could not find container \"6af7d9b56946bf4bdd4227aba8d5a56d6170f95fa323a8318801af1a3c065c4a\": container with ID starting with 
6af7d9b56946bf4bdd4227aba8d5a56d6170f95fa323a8318801af1a3c065c4a not found: ID does not exist" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.926588 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99d74c9d-064b-45e5-a478-9b78a19e34fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99d74c9d-064b-45e5-a478-9b78a19e34fa" (UID: "99d74c9d-064b-45e5-a478-9b78a19e34fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.996382 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99d74c9d-064b-45e5-a478-9b78a19e34fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.996413 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvj6q\" (UniqueName: \"kubernetes.io/projected/99d74c9d-064b-45e5-a478-9b78a19e34fa-kube-api-access-bvj6q\") on node \"crc\" DevicePath \"\"" Jan 26 08:07:34 crc kubenswrapper[4806]: I0126 08:07:34.996425 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99d74c9d-064b-45e5-a478-9b78a19e34fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:07:35 crc kubenswrapper[4806]: I0126 08:07:35.161774 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr2xv"] Jan 26 08:07:35 crc kubenswrapper[4806]: I0126 08:07:35.167973 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dr2xv"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.048106 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99d74c9d-064b-45e5-a478-9b78a19e34fa" path="/var/lib/kubelet/pods/99d74c9d-064b-45e5-a478-9b78a19e34fa/volumes" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.221275 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz"] Jan 26 08:07:37 crc kubenswrapper[4806]: E0126 08:07:37.221581 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d74c9d-064b-45e5-a478-9b78a19e34fa" containerName="registry-server" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.221604 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d74c9d-064b-45e5-a478-9b78a19e34fa" containerName="registry-server" Jan 26 08:07:37 crc kubenswrapper[4806]: E0126 08:07:37.221618 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d74c9d-064b-45e5-a478-9b78a19e34fa" containerName="extract-content" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.221627 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d74c9d-064b-45e5-a478-9b78a19e34fa" containerName="extract-content" Jan 26 08:07:37 crc kubenswrapper[4806]: E0126 08:07:37.221644 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d74c9d-064b-45e5-a478-9b78a19e34fa" containerName="extract-utilities" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.221654 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d74c9d-064b-45e5-a478-9b78a19e34fa" containerName="extract-utilities" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.221778 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="99d74c9d-064b-45e5-a478-9b78a19e34fa" containerName="registry-server" Jan 26 08:07:37 
crc kubenswrapper[4806]: I0126 08:07:37.222358 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.227723 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-8cgl5" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.231553 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.232874 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.248144 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qbfcf" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.251397 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.256707 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.257560 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.259810 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-6glbt" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.278641 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.295271 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.330003 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.330844 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.332919 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-j4p8z" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.335208 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prvqz\" (UniqueName: \"kubernetes.io/projected/55e84831-2044-4555-844d-93053648d17a-kube-api-access-prvqz\") pod \"cinder-operator-controller-manager-7478f7dbf9-wwr7b\" (UID: \"55e84831-2044-4555-844d-93053648d17a\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.335312 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7w7l\" (UniqueName: \"kubernetes.io/projected/9b0f19e9-5ee8-4f12-a453-2195b20a8f09-kube-api-access-z7w7l\") pod \"barbican-operator-controller-manager-7f86f8796f-9mwdz\" (UID: \"9b0f19e9-5ee8-4f12-a453-2195b20a8f09\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.336037 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.336836 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.346970 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-9jt9b" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.352944 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.353661 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.355497 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-bddhh" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.358691 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.382838 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.390597 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.409751 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.410559 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.414079 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.414083 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2p9bw" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.416350 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.430258 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.431277 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.436047 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7w7l\" (UniqueName: \"kubernetes.io/projected/9b0f19e9-5ee8-4f12-a453-2195b20a8f09-kube-api-access-z7w7l\") pod \"barbican-operator-controller-manager-7f86f8796f-9mwdz\" (UID: \"9b0f19e9-5ee8-4f12-a453-2195b20a8f09\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.436108 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mj4\" (UniqueName: \"kubernetes.io/projected/293159cc-40c4-4335-ad77-65f1c493e35a-kube-api-access-h6mj4\") pod \"designate-operator-controller-manager-b45d7bf98-9ld8m\" (UID: \"293159cc-40c4-4335-ad77-65f1c493e35a\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.436165 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48fls\" (UniqueName: \"kubernetes.io/projected/002839d6-a78d-4826-a93c-b6dec9671bab-kube-api-access-48fls\") pod \"glance-operator-controller-manager-78fdd796fd-psw9b\" (UID: \"002839d6-a78d-4826-a93c-b6dec9671bab\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.436202 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whstt\" (UniqueName: \"kubernetes.io/projected/3191a58e-ee1d-430f-97ce-c7c532d132a6-kube-api-access-whstt\") pod \"heat-operator-controller-manager-594c8c9d5d-q9hmq\" (UID: \"3191a58e-ee1d-430f-97ce-c7c532d132a6\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.436241 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prvqz\" (UniqueName: \"kubernetes.io/projected/55e84831-2044-4555-844d-93053648d17a-kube-api-access-prvqz\") pod \"cinder-operator-controller-manager-7478f7dbf9-wwr7b\" (UID: \"55e84831-2044-4555-844d-93053648d17a\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.441962 4806 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-lbpbc" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.450396 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.470406 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7w7l\" (UniqueName: \"kubernetes.io/projected/9b0f19e9-5ee8-4f12-a453-2195b20a8f09-kube-api-access-z7w7l\") pod \"barbican-operator-controller-manager-7f86f8796f-9mwdz\" (UID: \"9b0f19e9-5ee8-4f12-a453-2195b20a8f09\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.475180 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prvqz\" (UniqueName: \"kubernetes.io/projected/55e84831-2044-4555-844d-93053648d17a-kube-api-access-prvqz\") pod \"cinder-operator-controller-manager-7478f7dbf9-wwr7b\" (UID: \"55e84831-2044-4555-844d-93053648d17a\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.510846 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.511764 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.516633 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.526855 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-8qdkr" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.527079 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.528041 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.529966 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.539592 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.540039 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.540508 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.541167 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-fxlgn" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.546757 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.549608 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48fls\" (UniqueName: \"kubernetes.io/projected/002839d6-a78d-4826-a93c-b6dec9671bab-kube-api-access-48fls\") pod \"glance-operator-controller-manager-78fdd796fd-psw9b\" (UID: \"002839d6-a78d-4826-a93c-b6dec9671bab\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.549668 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whstt\" (UniqueName: \"kubernetes.io/projected/3191a58e-ee1d-430f-97ce-c7c532d132a6-kube-api-access-whstt\") pod \"heat-operator-controller-manager-594c8c9d5d-q9hmq\" (UID: \"3191a58e-ee1d-430f-97ce-c7c532d132a6\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.549779 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz2js\" (UniqueName: \"kubernetes.io/projected/d36b3dbe-4776-4c55-a64f-4ea15cad6fb7-kube-api-access-cz2js\") pod \"horizon-operator-controller-manager-77d5c5b54f-6fzqz\" (UID: \"d36b3dbe-4776-4c55-a64f-4ea15cad6fb7\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.549803 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t67r5\" (UniqueName: \"kubernetes.io/projected/00c672af-a00d-45d3-9d80-39de7bbcf49c-kube-api-access-t67r5\") pod \"ironic-operator-controller-manager-598f7747c9-fd757\" (UID: \"00c672af-a00d-45d3-9d80-39de7bbcf49c\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.551551 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nwn2g" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.554354 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf4d5\" (UniqueName: \"kubernetes.io/projected/52ffe9cc-7d93-400f-a7ef-81d4c7335024-kube-api-access-sf4d5\") pod \"infra-operator-controller-manager-694cf4f878-8s72c\" (UID: \"52ffe9cc-7d93-400f-a7ef-81d4c7335024\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.554426 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6mj4\" (UniqueName: \"kubernetes.io/projected/293159cc-40c4-4335-ad77-65f1c493e35a-kube-api-access-h6mj4\") pod 
\"designate-operator-controller-manager-b45d7bf98-9ld8m\" (UID: \"293159cc-40c4-4335-ad77-65f1c493e35a\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.554596 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert\") pod \"infra-operator-controller-manager-694cf4f878-8s72c\" (UID: \"52ffe9cc-7d93-400f-a7ef-81d4c7335024\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.557654 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.562168 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.565366 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-7j7lq" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.596166 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.602262 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48fls\" (UniqueName: \"kubernetes.io/projected/002839d6-a78d-4826-a93c-b6dec9671bab-kube-api-access-48fls\") pod \"glance-operator-controller-manager-78fdd796fd-psw9b\" (UID: \"002839d6-a78d-4826-a93c-b6dec9671bab\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.636041 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.638901 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.641058 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-k4j2d" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.648325 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6mj4\" (UniqueName: \"kubernetes.io/projected/293159cc-40c4-4335-ad77-65f1c493e35a-kube-api-access-h6mj4\") pod \"designate-operator-controller-manager-b45d7bf98-9ld8m\" (UID: \"293159cc-40c4-4335-ad77-65f1c493e35a\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.650366 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whstt\" (UniqueName: \"kubernetes.io/projected/3191a58e-ee1d-430f-97ce-c7c532d132a6-kube-api-access-whstt\") pod \"heat-operator-controller-manager-594c8c9d5d-q9hmq\" (UID: \"3191a58e-ee1d-430f-97ce-c7c532d132a6\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.652105 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.664773 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.715944 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkrf9\" (UniqueName: \"kubernetes.io/projected/41df476c-557f-407c-8711-57c979600bea-kube-api-access-rkrf9\") pod \"manila-operator-controller-manager-78c6999f6f-497jq\" (UID: \"41df476c-557f-407c-8711-57c979600bea\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.720998 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert\") pod \"infra-operator-controller-manager-694cf4f878-8s72c\" (UID: \"52ffe9cc-7d93-400f-a7ef-81d4c7335024\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.721061 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz2r9\" (UniqueName: \"kubernetes.io/projected/5270b699-329c-41eb-a8cf-5f94eeb4cd11-kube-api-access-jz2r9\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg\" (UID: \"5270b699-329c-41eb-a8cf-5f94eeb4cd11\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.721121 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98hll\" (UniqueName: \"kubernetes.io/projected/167c1b32-0550-4c81-a2b6-b30e8d58dd3d-kube-api-access-98hll\") pod \"neutron-operator-controller-manager-78d58447c5-rk454\" (UID: \"167c1b32-0550-4c81-a2b6-b30e8d58dd3d\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.721167 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85bn6\" (UniqueName: \"kubernetes.io/projected/1ca63855-2d6d-4543-a084-4cdb7c6d0c5c-kube-api-access-85bn6\") pod \"keystone-operator-controller-manager-b8b6d4659-rf496\" (UID: \"1ca63855-2d6d-4543-a084-4cdb7c6d0c5c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.721206 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz2js\" (UniqueName: \"kubernetes.io/projected/d36b3dbe-4776-4c55-a64f-4ea15cad6fb7-kube-api-access-cz2js\") pod \"horizon-operator-controller-manager-77d5c5b54f-6fzqz\" (UID: \"d36b3dbe-4776-4c55-a64f-4ea15cad6fb7\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.721224 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t67r5\" (UniqueName: \"kubernetes.io/projected/00c672af-a00d-45d3-9d80-39de7bbcf49c-kube-api-access-t67r5\") pod \"ironic-operator-controller-manager-598f7747c9-fd757\" (UID: \"00c672af-a00d-45d3-9d80-39de7bbcf49c\") " 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.721239 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf4d5\" (UniqueName: \"kubernetes.io/projected/52ffe9cc-7d93-400f-a7ef-81d4c7335024-kube-api-access-sf4d5\") pod \"infra-operator-controller-manager-694cf4f878-8s72c\" (UID: \"52ffe9cc-7d93-400f-a7ef-81d4c7335024\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:07:37 crc kubenswrapper[4806]: E0126 08:07:37.721580 4806 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 08:07:37 crc kubenswrapper[4806]: E0126 08:07:37.721624 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert podName:52ffe9cc-7d93-400f-a7ef-81d4c7335024 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:38.221609326 +0000 UTC m=+837.486017372 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert") pod "infra-operator-controller-manager-694cf4f878-8s72c" (UID: "52ffe9cc-7d93-400f-a7ef-81d4c7335024") : secret "infra-operator-webhook-server-cert" not found Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.722015 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.781475 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.793262 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz2js\" (UniqueName: \"kubernetes.io/projected/d36b3dbe-4776-4c55-a64f-4ea15cad6fb7-kube-api-access-cz2js\") pod \"horizon-operator-controller-manager-77d5c5b54f-6fzqz\" (UID: \"d36b3dbe-4776-4c55-a64f-4ea15cad6fb7\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.793324 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.794140 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.802505 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-k4jng" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.817567 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf4d5\" (UniqueName: \"kubernetes.io/projected/52ffe9cc-7d93-400f-a7ef-81d4c7335024-kube-api-access-sf4d5\") pod \"infra-operator-controller-manager-694cf4f878-8s72c\" (UID: \"52ffe9cc-7d93-400f-a7ef-81d4c7335024\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.824410 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz2r9\" (UniqueName: \"kubernetes.io/projected/5270b699-329c-41eb-a8cf-5f94eeb4cd11-kube-api-access-jz2r9\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg\" (UID: \"5270b699-329c-41eb-a8cf-5f94eeb4cd11\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.824472 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98hll\" (UniqueName: \"kubernetes.io/projected/167c1b32-0550-4c81-a2b6-b30e8d58dd3d-kube-api-access-98hll\") pod \"neutron-operator-controller-manager-78d58447c5-rk454\" (UID: \"167c1b32-0550-4c81-a2b6-b30e8d58dd3d\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.824502 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85bn6\" (UniqueName: \"kubernetes.io/projected/1ca63855-2d6d-4543-a084-4cdb7c6d0c5c-kube-api-access-85bn6\") pod \"keystone-operator-controller-manager-b8b6d4659-rf496\" (UID: \"1ca63855-2d6d-4543-a084-4cdb7c6d0c5c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.824543 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ltsh\" (UniqueName: \"kubernetes.io/projected/9096956d-1ed7-4e3c-bdec-d86c14168601-kube-api-access-7ltsh\") pod \"nova-operator-controller-manager-7bdb645866-tj4m9\" (UID: \"9096956d-1ed7-4e3c-bdec-d86c14168601\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.824597 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkrf9\" (UniqueName: \"kubernetes.io/projected/41df476c-557f-407c-8711-57c979600bea-kube-api-access-rkrf9\") pod \"manila-operator-controller-manager-78c6999f6f-497jq\" (UID: \"41df476c-557f-407c-8711-57c979600bea\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.862171 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t67r5\" (UniqueName: \"kubernetes.io/projected/00c672af-a00d-45d3-9d80-39de7bbcf49c-kube-api-access-t67r5\") pod \"ironic-operator-controller-manager-598f7747c9-fd757\" (UID: \"00c672af-a00d-45d3-9d80-39de7bbcf49c\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757" Jan 26 08:07:37 crc 
kubenswrapper[4806]: I0126 08:07:37.874841 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.875366 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.876113 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.882104 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-k6784" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.891390 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98hll\" (UniqueName: \"kubernetes.io/projected/167c1b32-0550-4c81-a2b6-b30e8d58dd3d-kube-api-access-98hll\") pod \"neutron-operator-controller-manager-78d58447c5-rk454\" (UID: \"167c1b32-0550-4c81-a2b6-b30e8d58dd3d\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.894812 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.909124 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz2r9\" (UniqueName: \"kubernetes.io/projected/5270b699-329c-41eb-a8cf-5f94eeb4cd11-kube-api-access-jz2r9\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg\" (UID: \"5270b699-329c-41eb-a8cf-5f94eeb4cd11\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.925452 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlp9f\" (UniqueName: \"kubernetes.io/projected/f84e4d06-7a1b-4038-b30f-ec7bf90efa2c-kube-api-access-dlp9f\") pod \"octavia-operator-controller-manager-5f4cd88d46-nk6xc\" (UID: \"f84e4d06-7a1b-4038-b30f-ec7bf90efa2c\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.925753 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ltsh\" (UniqueName: \"kubernetes.io/projected/9096956d-1ed7-4e3c-bdec-d86c14168601-kube-api-access-7ltsh\") pod \"nova-operator-controller-manager-7bdb645866-tj4m9\" (UID: \"9096956d-1ed7-4e3c-bdec-d86c14168601\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.953985 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q"] Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.954819 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.967080 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.967339 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hkmr9" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.971717 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85bn6\" (UniqueName: \"kubernetes.io/projected/1ca63855-2d6d-4543-a084-4cdb7c6d0c5c-kube-api-access-85bn6\") pod \"keystone-operator-controller-manager-b8b6d4659-rf496\" (UID: \"1ca63855-2d6d-4543-a084-4cdb7c6d0c5c\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.975233 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.985351 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkrf9\" (UniqueName: \"kubernetes.io/projected/41df476c-557f-407c-8711-57c979600bea-kube-api-access-rkrf9\") pod \"manila-operator-controller-manager-78c6999f6f-497jq\" (UID: \"41df476c-557f-407c-8711-57c979600bea\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq" Jan 26 08:07:37 crc kubenswrapper[4806]: I0126 08:07:37.985844 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ltsh\" (UniqueName: \"kubernetes.io/projected/9096956d-1ed7-4e3c-bdec-d86c14168601-kube-api-access-7ltsh\") pod \"nova-operator-controller-manager-7bdb645866-tj4m9\" (UID: \"9096956d-1ed7-4e3c-bdec-d86c14168601\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.032755 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.034603 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlp9f\" (UniqueName: \"kubernetes.io/projected/f84e4d06-7a1b-4038-b30f-ec7bf90efa2c-kube-api-access-dlp9f\") pod \"octavia-operator-controller-manager-5f4cd88d46-nk6xc\" (UID: \"f84e4d06-7a1b-4038-b30f-ec7bf90efa2c\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.034723 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2vlt\" (UniqueName: \"kubernetes.io/projected/109bb090-2776-45ce-b579-711304ae2db8-kube-api-access-f2vlt\") pod \"placement-operator-controller-manager-79d5ccc684-ncfjs\" (UID: \"109bb090-2776-45ce-b579-711304ae2db8\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.037568 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.038297 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.043898 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-qr6nl" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.050086 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.058816 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.085517 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.100707 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlp9f\" (UniqueName: \"kubernetes.io/projected/f84e4d06-7a1b-4038-b30f-ec7bf90efa2c-kube-api-access-dlp9f\") pod \"octavia-operator-controller-manager-5f4cd88d46-nk6xc\" (UID: \"f84e4d06-7a1b-4038-b30f-ec7bf90efa2c\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.116566 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.117370 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.124015 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-sw6qs" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.130491 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.140251 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2vlt\" (UniqueName: \"kubernetes.io/projected/109bb090-2776-45ce-b579-711304ae2db8-kube-api-access-f2vlt\") pod \"placement-operator-controller-manager-79d5ccc684-ncfjs\" (UID: \"109bb090-2776-45ce-b579-711304ae2db8\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.140293 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bc45q\" (UID: \"f955001e-4d2d-437c-bc31-19a4234ed701\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.140344 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69kb4\" (UniqueName: \"kubernetes.io/projected/1a85568e-bc00-4bc5-a99e-bcef2f7041ee-kube-api-access-69kb4\") pod \"ovn-operator-controller-manager-6f75f45d54-ktw6x\" (UID: \"1a85568e-bc00-4bc5-a99e-bcef2f7041ee\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.140383 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnplp\" (UniqueName: \"kubernetes.io/projected/f955001e-4d2d-437c-bc31-19a4234ed701-kube-api-access-rnplp\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bc45q\" (UID: \"f955001e-4d2d-437c-bc31-19a4234ed701\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.141733 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.158841 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.175158 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.196131 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.200197 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2vlt\" (UniqueName: \"kubernetes.io/projected/109bb090-2776-45ce-b579-711304ae2db8-kube-api-access-f2vlt\") pod \"placement-operator-controller-manager-79d5ccc684-ncfjs\" (UID: \"109bb090-2776-45ce-b579-711304ae2db8\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.214420 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.215234 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.217686 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-gsp7n" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.233647 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.244501 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcrvk\" (UniqueName: \"kubernetes.io/projected/e048fc14-f2ba-4930-9e77-a281b25c7a07-kube-api-access-dcrvk\") pod \"swift-operator-controller-manager-547cbdb99f-swpm7\" (UID: \"e048fc14-f2ba-4930-9e77-a281b25c7a07\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.244553 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnplp\" (UniqueName: \"kubernetes.io/projected/f955001e-4d2d-437c-bc31-19a4234ed701-kube-api-access-rnplp\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bc45q\" (UID: \"f955001e-4d2d-437c-bc31-19a4234ed701\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.244583 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert\") pod \"infra-operator-controller-manager-694cf4f878-8s72c\" (UID: \"52ffe9cc-7d93-400f-a7ef-81d4c7335024\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.244621 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bc45q\" (UID: \"f955001e-4d2d-437c-bc31-19a4234ed701\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.244670 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69kb4\" (UniqueName: \"kubernetes.io/projected/1a85568e-bc00-4bc5-a99e-bcef2f7041ee-kube-api-access-69kb4\") pod \"ovn-operator-controller-manager-6f75f45d54-ktw6x\" (UID: \"1a85568e-bc00-4bc5-a99e-bcef2f7041ee\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" Jan 26 08:07:38 crc kubenswrapper[4806]: E0126 08:07:38.245111 4806 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 08:07:38 crc kubenswrapper[4806]: E0126 08:07:38.245146 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert podName:52ffe9cc-7d93-400f-a7ef-81d4c7335024 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:39.245133788 +0000 UTC m=+838.509541844 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert") pod "infra-operator-controller-manager-694cf4f878-8s72c" (UID: "52ffe9cc-7d93-400f-a7ef-81d4c7335024") : secret "infra-operator-webhook-server-cert" not found Jan 26 08:07:38 crc kubenswrapper[4806]: E0126 08:07:38.245354 4806 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 08:07:38 crc kubenswrapper[4806]: E0126 08:07:38.245376 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert podName:f955001e-4d2d-437c-bc31-19a4234ed701 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:38.745369605 +0000 UTC m=+838.009777661 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" (UID: "f955001e-4d2d-437c-bc31-19a4234ed701") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.273450 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.274064 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69kb4\" (UniqueName: \"kubernetes.io/projected/1a85568e-bc00-4bc5-a99e-bcef2f7041ee-kube-api-access-69kb4\") pod \"ovn-operator-controller-manager-6f75f45d54-ktw6x\" (UID: \"1a85568e-bc00-4bc5-a99e-bcef2f7041ee\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.275219 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnplp\" (UniqueName: \"kubernetes.io/projected/f955001e-4d2d-437c-bc31-19a4234ed701-kube-api-access-rnplp\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bc45q\" (UID: \"f955001e-4d2d-437c-bc31-19a4234ed701\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.276362 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.284382 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.309560 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.310651 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.317690 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-5kd6l" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.358278 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pprz9\" (UniqueName: \"kubernetes.io/projected/3959df09-4052-4ccb-8c3f-b3f5aebb747c-kube-api-access-pprz9\") pod \"telemetry-operator-controller-manager-85cd9769bb-p4bvd\" (UID: \"3959df09-4052-4ccb-8c3f-b3f5aebb747c\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.358381 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcrvk\" (UniqueName: \"kubernetes.io/projected/e048fc14-f2ba-4930-9e77-a281b25c7a07-kube-api-access-dcrvk\") pod \"swift-operator-controller-manager-547cbdb99f-swpm7\" (UID: \"e048fc14-f2ba-4930-9e77-a281b25c7a07\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.396202 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcrvk\" (UniqueName: \"kubernetes.io/projected/e048fc14-f2ba-4930-9e77-a281b25c7a07-kube-api-access-dcrvk\") pod \"swift-operator-controller-manager-547cbdb99f-swpm7\" (UID: \"e048fc14-f2ba-4930-9e77-a281b25c7a07\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.405599 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.413256 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.430515 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-zqkgl"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.431364 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.436145 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-zqkgl"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.446904 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-dfrl4" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.457656 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.459286 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4lf4\" (UniqueName: \"kubernetes.io/projected/16faebac-962b-4520-bb85-f77bc1d781d1-kube-api-access-c4lf4\") pod \"test-operator-controller-manager-69797bbcbd-bhg2d\" (UID: \"16faebac-962b-4520-bb85-f77bc1d781d1\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.459353 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pprz9\" (UniqueName: \"kubernetes.io/projected/3959df09-4052-4ccb-8c3f-b3f5aebb747c-kube-api-access-pprz9\") pod \"telemetry-operator-controller-manager-85cd9769bb-p4bvd\" (UID: \"3959df09-4052-4ccb-8c3f-b3f5aebb747c\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.502961 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pprz9\" (UniqueName: \"kubernetes.io/projected/3959df09-4052-4ccb-8c3f-b3f5aebb747c-kube-api-access-pprz9\") pod \"telemetry-operator-controller-manager-85cd9769bb-p4bvd\" (UID: \"3959df09-4052-4ccb-8c3f-b3f5aebb747c\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.555475 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.560709 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.562929 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.564868 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-pmbn8" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.565023 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.568794 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjzts\" (UniqueName: \"kubernetes.io/projected/f63ffecc-85dc-48df-b4d6-675d0792cacf-kube-api-access-wjzts\") pod \"watcher-operator-controller-manager-564965969-zqkgl\" (UID: \"f63ffecc-85dc-48df-b4d6-675d0792cacf\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.568932 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4lf4\" (UniqueName: \"kubernetes.io/projected/16faebac-962b-4520-bb85-f77bc1d781d1-kube-api-access-c4lf4\") pod \"test-operator-controller-manager-69797bbcbd-bhg2d\" (UID: \"16faebac-962b-4520-bb85-f77bc1d781d1\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.597159 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.633827 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.676679 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.680378 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtt9f\" (UniqueName: \"kubernetes.io/projected/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-kube-api-access-rtt9f\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.680428 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.680457 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.680570 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjzts\" (UniqueName: \"kubernetes.io/projected/f63ffecc-85dc-48df-b4d6-675d0792cacf-kube-api-access-wjzts\") pod \"watcher-operator-controller-manager-564965969-zqkgl\" (UID: \"f63ffecc-85dc-48df-b4d6-675d0792cacf\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.693165 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.693317 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.696084 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-pqc9w" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.712456 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.722781 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjzts\" (UniqueName: \"kubernetes.io/projected/f63ffecc-85dc-48df-b4d6-675d0792cacf-kube-api-access-wjzts\") pod \"watcher-operator-controller-manager-564965969-zqkgl\" (UID: \"f63ffecc-85dc-48df-b4d6-675d0792cacf\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.715996 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4lf4\" (UniqueName: \"kubernetes.io/projected/16faebac-962b-4520-bb85-f77bc1d781d1-kube-api-access-c4lf4\") pod \"test-operator-controller-manager-69797bbcbd-bhg2d\" (UID: \"16faebac-962b-4520-bb85-f77bc1d781d1\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.755143 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.788348 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtt9f\" (UniqueName: \"kubernetes.io/projected/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-kube-api-access-rtt9f\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.788389 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.788413 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:38 crc kubenswrapper[4806]: E0126 08:07:38.788550 4806 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.788434 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bc45q\" (UID: \"f955001e-4d2d-437c-bc31-19a4234ed701\") 
" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:07:38 crc kubenswrapper[4806]: E0126 08:07:38.788916 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert podName:f955001e-4d2d-437c-bc31-19a4234ed701 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:39.788585797 +0000 UTC m=+839.052993853 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" (UID: "f955001e-4d2d-437c-bc31-19a4234ed701") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.788949 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntdqv\" (UniqueName: \"kubernetes.io/projected/77931dd1-1acc-4552-8605-33a24c74fc43-kube-api-access-ntdqv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gnhrx\" (UID: \"77931dd1-1acc-4552-8605-33a24c74fc43\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" Jan 26 08:07:38 crc kubenswrapper[4806]: E0126 08:07:38.788968 4806 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 08:07:38 crc kubenswrapper[4806]: E0126 08:07:38.789008 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs podName:d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:39.288997648 +0000 UTC m=+838.553405704 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs") pod "openstack-operator-controller-manager-6898c455c-d6bzz" (UID: "d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2") : secret "metrics-server-cert" not found Jan 26 08:07:38 crc kubenswrapper[4806]: E0126 08:07:38.789047 4806 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 08:07:38 crc kubenswrapper[4806]: E0126 08:07:38.789065 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs podName:d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:39.28905936 +0000 UTC m=+838.553467406 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs") pod "openstack-operator-controller-manager-6898c455c-d6bzz" (UID: "d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2") : secret "webhook-server-cert" not found Jan 26 08:07:38 crc kubenswrapper[4806]: W0126 08:07:38.822695 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3191a58e_ee1d_430f_97ce_c7c532d132a6.slice/crio-676dd1f027c4132557f7a7e40f50af4b33d0ab8210fc85b47d8e8129bf4474e6 WatchSource:0}: Error finding container 676dd1f027c4132557f7a7e40f50af4b33d0ab8210fc85b47d8e8129bf4474e6: Status 404 returned error can't find the container with id 676dd1f027c4132557f7a7e40f50af4b33d0ab8210fc85b47d8e8129bf4474e6 Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.827355 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtt9f\" (UniqueName: \"kubernetes.io/projected/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-kube-api-access-rtt9f\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.890157 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntdqv\" (UniqueName: \"kubernetes.io/projected/77931dd1-1acc-4552-8605-33a24c74fc43-kube-api-access-ntdqv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gnhrx\" (UID: \"77931dd1-1acc-4552-8605-33a24c74fc43\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.892478 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b"] Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.910992 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq" event={"ID":"3191a58e-ee1d-430f-97ce-c7c532d132a6","Type":"ContainerStarted","Data":"676dd1f027c4132557f7a7e40f50af4b33d0ab8210fc85b47d8e8129bf4474e6"} Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.913606 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz" event={"ID":"9b0f19e9-5ee8-4f12-a453-2195b20a8f09","Type":"ContainerStarted","Data":"04eae3870e1345d2ec71b53e7429be0b388a252dc4d1914ffd9098c9c3645639"} Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.926010 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntdqv\" (UniqueName: \"kubernetes.io/projected/77931dd1-1acc-4552-8605-33a24c74fc43-kube-api-access-ntdqv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-gnhrx\" (UID: \"77931dd1-1acc-4552-8605-33a24c74fc43\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.957843 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" Jan 26 08:07:38 crc kubenswrapper[4806]: I0126 08:07:38.978979 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.089062 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.111112 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.136567 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.296656 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert\") pod \"infra-operator-controller-manager-694cf4f878-8s72c\" (UID: \"52ffe9cc-7d93-400f-a7ef-81d4c7335024\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.296709 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.296729 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.296832 4806 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.296872 4806 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.296917 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert podName:52ffe9cc-7d93-400f-a7ef-81d4c7335024 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:41.296898762 +0000 UTC m=+840.561306818 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert") pod "infra-operator-controller-manager-694cf4f878-8s72c" (UID: "52ffe9cc-7d93-400f-a7ef-81d4c7335024") : secret "infra-operator-webhook-server-cert" not found Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.296922 4806 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.296951 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs podName:d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2 nodeName:}" failed. 
No retries permitted until 2026-01-26 08:07:40.296925182 +0000 UTC m=+839.561333238 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs") pod "openstack-operator-controller-manager-6898c455c-d6bzz" (UID: "d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2") : secret "webhook-server-cert" not found Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.296974 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs podName:d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:40.296960223 +0000 UTC m=+839.561368269 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs") pod "openstack-operator-controller-manager-6898c455c-d6bzz" (UID: "d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2") : secret "metrics-server-cert" not found Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.456626 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.464333 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.493847 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.500616 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496"] Jan 26 08:07:39 crc kubenswrapper[4806]: W0126 08:07:39.520202 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf84e4d06_7a1b_4038_b30f_ec7bf90efa2c.slice/crio-602c32cdcb5e1fcab2f991460f91ea884233aa19dc231c744684f228340ad936 WatchSource:0}: Error finding container 602c32cdcb5e1fcab2f991460f91ea884233aa19dc231c744684f228340ad936: Status 404 returned error can't find the container with id 602c32cdcb5e1fcab2f991460f91ea884233aa19dc231c744684f228340ad936 Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.531616 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.556291 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.565909 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.574968 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.584819 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.592707 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.607686 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9"] Jan 26 08:07:39 crc kubenswrapper[4806]: W0126 08:07:39.608730 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod167c1b32_0550_4c81_a2b6_b30e8d58dd3d.slice/crio-179694b315f21477f7d4ccd83289342b4b25e41cf738185ea5aaf66e6d15b3cc WatchSource:0}: Error finding container 179694b315f21477f7d4ccd83289342b4b25e41cf738185ea5aaf66e6d15b3cc: Status 404 returned error can't find the container with id 179694b315f21477f7d4ccd83289342b4b25e41cf738185ea5aaf66e6d15b3cc Jan 26 08:07:39 crc kubenswrapper[4806]: W0126 08:07:39.613894 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3959df09_4052_4ccb_8c3f_b3f5aebb747c.slice/crio-a1f293f28bc6c1910c2f4bab6f6f056d153c78cf4438dafd6d51a3aec214c006 WatchSource:0}: Error finding container a1f293f28bc6c1910c2f4bab6f6f056d153c78cf4438dafd6d51a3aec214c006: Status 404 returned error can't find the container with id a1f293f28bc6c1910c2f4bab6f6f056d153c78cf4438dafd6d51a3aec214c006 Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.613932 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.620965 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-zqkgl"] Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.625444 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d"] Jan 26 08:07:39 crc kubenswrapper[4806]: W0126 08:07:39.638202 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod109bb090_2776_45ce_b579_711304ae2db8.slice/crio-06b5d425a88e10ef060396ecdd53efbc0ed92acddaf8589043de86d114a60991 WatchSource:0}: Error finding container 06b5d425a88e10ef060396ecdd53efbc0ed92acddaf8589043de86d114a60991: Status 404 returned error can't find the container with id 06b5d425a88e10ef060396ecdd53efbc0ed92acddaf8589043de86d114a60991 Jan 26 08:07:39 crc kubenswrapper[4806]: W0126 08:07:39.656892 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode048fc14_f2ba_4930_9e77_a281b25c7a07.slice/crio-c3eb63debe0ff83529b4db73ea5299bcbc207afd82aeca841605cd26d8df32bc WatchSource:0}: Error finding container c3eb63debe0ff83529b4db73ea5299bcbc207afd82aeca841605cd26d8df32bc: Status 404 returned error can't find the container with id c3eb63debe0ff83529b4db73ea5299bcbc207afd82aeca841605cd26d8df32bc Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.660681 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx"] Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.663549 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dcrvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-swpm7_openstack-operators(e048fc14-f2ba-4930-9e77-a281b25c7a07): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.664750 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" podUID="e048fc14-f2ba-4930-9e77-a281b25c7a07" Jan 26 08:07:39 crc kubenswrapper[4806]: W0126 08:07:39.667271 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9096956d_1ed7_4e3c_bdec_d86c14168601.slice/crio-ebdbd2cce021690a93031e765ce0ab4ad6bc3fe52e1eaed10ebd97a6d06d6c9b WatchSource:0}: Error finding container ebdbd2cce021690a93031e765ce0ab4ad6bc3fe52e1eaed10ebd97a6d06d6c9b: Status 404 returned error can't find the container with id ebdbd2cce021690a93031e765ce0ab4ad6bc3fe52e1eaed10ebd97a6d06d6c9b Jan 26 08:07:39 crc kubenswrapper[4806]: W0126 08:07:39.668467 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a85568e_bc00_4bc5_a99e_bcef2f7041ee.slice/crio-24ee773a077d5560b5a8652d6f735601bab8bb1b684d424f9f57114717da615c WatchSource:0}: Error finding container 24ee773a077d5560b5a8652d6f735601bab8bb1b684d424f9f57114717da615c: Status 404 returned error can't 
find the container with id 24ee773a077d5560b5a8652d6f735601bab8bb1b684d424f9f57114717da615c Jan 26 08:07:39 crc kubenswrapper[4806]: W0126 08:07:39.671666 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf63ffecc_85dc_48df_b4d6_675d0792cacf.slice/crio-fe8832f8c590897ec3adfa1116a055f5e844589dd3c677cf80d0689bb3bde627 WatchSource:0}: Error finding container fe8832f8c590897ec3adfa1116a055f5e844589dd3c677cf80d0689bb3bde627: Status 404 returned error can't find the container with id fe8832f8c590897ec3adfa1116a055f5e844589dd3c677cf80d0689bb3bde627 Jan 26 08:07:39 crc kubenswrapper[4806]: W0126 08:07:39.675140 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77931dd1_1acc_4552_8605_33a24c74fc43.slice/crio-c20316999f5c8e5f04a2331863a7eb809cc85c1ccf10c87a64d3a15290fae950 WatchSource:0}: Error finding container c20316999f5c8e5f04a2331863a7eb809cc85c1ccf10c87a64d3a15290fae950: Status 404 returned error can't find the container with id c20316999f5c8e5f04a2331863a7eb809cc85c1ccf10c87a64d3a15290fae950 Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.676429 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-69kb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovn-operator-controller-manager-6f75f45d54-ktw6x_openstack-operators(1a85568e-bc00-4bc5-a99e-bcef2f7041ee): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 08:07:39 crc kubenswrapper[4806]: W0126 08:07:39.676957 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16faebac_962b_4520_bb85_f77bc1d781d1.slice/crio-b45f8c4329b003ab4beffef9a495c56fd492efdc9f7e519248dedb345c10c379 WatchSource:0}: Error finding container b45f8c4329b003ab4beffef9a495c56fd492efdc9f7e519248dedb345c10c379: Status 404 returned error can't find the container with id b45f8c4329b003ab4beffef9a495c56fd492efdc9f7e519248dedb345c10c379 Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.677471 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7ltsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-tj4m9_openstack-operators(9096956d-1ed7-4e3c-bdec-d86c14168601): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.677638 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" 
podUID="1a85568e-bc00-4bc5-a99e-bcef2f7041ee" Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.678562 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" podUID="9096956d-1ed7-4e3c-bdec-d86c14168601" Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.679201 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ntdqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-gnhrx_openstack-operators(77931dd1-1acc-4552-8605-33a24c74fc43): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.680651 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" podUID="77931dd1-1acc-4552-8605-33a24c74fc43" Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.683997 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjzts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-zqkgl_openstack-operators(f63ffecc-85dc-48df-b4d6-675d0792cacf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.685207 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" podUID="f63ffecc-85dc-48df-b4d6-675d0792cacf" Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.686699 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c4lf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-bhg2d_openstack-operators(16faebac-962b-4520-bb85-f77bc1d781d1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.688486 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" podUID="16faebac-962b-4520-bb85-f77bc1d781d1" Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.807353 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bc45q\" (UID: \"f955001e-4d2d-437c-bc31-19a4234ed701\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.807562 4806 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.807617 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert podName:f955001e-4d2d-437c-bc31-19a4234ed701 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:41.807601642 +0000 UTC m=+841.072009698 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" (UID: "f955001e-4d2d-437c-bc31-19a4234ed701") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.959331 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m" event={"ID":"293159cc-40c4-4335-ad77-65f1c493e35a","Type":"ContainerStarted","Data":"e3fd04a20f472457f664b666271a341ae04b068ea58ef086a3667e1dc3120907"} Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.961229 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg" event={"ID":"5270b699-329c-41eb-a8cf-5f94eeb4cd11","Type":"ContainerStarted","Data":"a65143ff338606ef3c618542011e9493cd6a0616ee62ae256e29f6120f8588e6"} Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.964576 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd" event={"ID":"3959df09-4052-4ccb-8c3f-b3f5aebb747c","Type":"ContainerStarted","Data":"a1f293f28bc6c1910c2f4bab6f6f056d153c78cf4438dafd6d51a3aec214c006"} Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.968755 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757" event={"ID":"00c672af-a00d-45d3-9d80-39de7bbcf49c","Type":"ContainerStarted","Data":"77ed943b53503612423f4ebddef80b30aeb5a491d047e6eb41e0a1edc454242a"} Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.970473 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" event={"ID":"f63ffecc-85dc-48df-b4d6-675d0792cacf","Type":"ContainerStarted","Data":"fe8832f8c590897ec3adfa1116a055f5e844589dd3c677cf80d0689bb3bde627"} Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.972195 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq" event={"ID":"41df476c-557f-407c-8711-57c979600bea","Type":"ContainerStarted","Data":"610be0f0901d196e6c115df14a9a911513c510325c032d3666f7ec81d693f1cd"} Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.972511 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" podUID="f63ffecc-85dc-48df-b4d6-675d0792cacf" Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.974612 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454" event={"ID":"167c1b32-0550-4c81-a2b6-b30e8d58dd3d","Type":"ContainerStarted","Data":"179694b315f21477f7d4ccd83289342b4b25e41cf738185ea5aaf66e6d15b3cc"} Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.975915 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" 
event={"ID":"1a85568e-bc00-4bc5-a99e-bcef2f7041ee","Type":"ContainerStarted","Data":"24ee773a077d5560b5a8652d6f735601bab8bb1b684d424f9f57114717da615c"} Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.977013 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b" event={"ID":"002839d6-a78d-4826-a93c-b6dec9671bab","Type":"ContainerStarted","Data":"656e20dd7f58eba07a4cf5511df13fcb49de483b5bd2e8a777462a70a250d920"} Jan 26 08:07:39 crc kubenswrapper[4806]: E0126 08:07:39.977826 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" podUID="1a85568e-bc00-4bc5-a99e-bcef2f7041ee" Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.980708 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs" event={"ID":"109bb090-2776-45ce-b579-711304ae2db8","Type":"ContainerStarted","Data":"06b5d425a88e10ef060396ecdd53efbc0ed92acddaf8589043de86d114a60991"} Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.988773 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz" event={"ID":"d36b3dbe-4776-4c55-a64f-4ea15cad6fb7","Type":"ContainerStarted","Data":"3cb4c0c6ed06605e82de2fe9426fde567b36c3fbba2428bb04b6089ff8bf36a1"} Jan 26 08:07:39 crc kubenswrapper[4806]: I0126 08:07:39.995604 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" event={"ID":"9096956d-1ed7-4e3c-bdec-d86c14168601","Type":"ContainerStarted","Data":"ebdbd2cce021690a93031e765ce0ab4ad6bc3fe52e1eaed10ebd97a6d06d6c9b"} Jan 26 08:07:40 crc kubenswrapper[4806]: E0126 08:07:39.999514 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" podUID="9096956d-1ed7-4e3c-bdec-d86c14168601" Jan 26 08:07:40 crc kubenswrapper[4806]: I0126 08:07:40.000626 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" event={"ID":"e048fc14-f2ba-4930-9e77-a281b25c7a07","Type":"ContainerStarted","Data":"c3eb63debe0ff83529b4db73ea5299bcbc207afd82aeca841605cd26d8df32bc"} Jan 26 08:07:40 crc kubenswrapper[4806]: E0126 08:07:40.005575 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" podUID="e048fc14-f2ba-4930-9e77-a281b25c7a07" Jan 26 08:07:40 crc kubenswrapper[4806]: I0126 08:07:40.008385 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496" 
event={"ID":"1ca63855-2d6d-4543-a084-4cdb7c6d0c5c","Type":"ContainerStarted","Data":"686b3beeb8c234778f67ecea6ebc94967334e32cebc63773f4b411fbc61151b7"} Jan 26 08:07:40 crc kubenswrapper[4806]: I0126 08:07:40.013694 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" event={"ID":"77931dd1-1acc-4552-8605-33a24c74fc43","Type":"ContainerStarted","Data":"c20316999f5c8e5f04a2331863a7eb809cc85c1ccf10c87a64d3a15290fae950"} Jan 26 08:07:40 crc kubenswrapper[4806]: E0126 08:07:40.020598 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" podUID="77931dd1-1acc-4552-8605-33a24c74fc43" Jan 26 08:07:40 crc kubenswrapper[4806]: I0126 08:07:40.022449 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" event={"ID":"16faebac-962b-4520-bb85-f77bc1d781d1","Type":"ContainerStarted","Data":"b45f8c4329b003ab4beffef9a495c56fd492efdc9f7e519248dedb345c10c379"} Jan 26 08:07:40 crc kubenswrapper[4806]: E0126 08:07:40.024604 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" podUID="16faebac-962b-4520-bb85-f77bc1d781d1" Jan 26 08:07:40 crc kubenswrapper[4806]: I0126 08:07:40.028348 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc" event={"ID":"f84e4d06-7a1b-4038-b30f-ec7bf90efa2c","Type":"ContainerStarted","Data":"602c32cdcb5e1fcab2f991460f91ea884233aa19dc231c744684f228340ad936"} Jan 26 08:07:40 crc kubenswrapper[4806]: I0126 08:07:40.036128 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b" event={"ID":"55e84831-2044-4555-844d-93053648d17a","Type":"ContainerStarted","Data":"3b1da7772d64a485600922d688f896d530cf37bdc126a25105ba5f9df9266985"} Jan 26 08:07:40 crc kubenswrapper[4806]: I0126 08:07:40.315642 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:40 crc kubenswrapper[4806]: I0126 08:07:40.315698 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:40 crc kubenswrapper[4806]: E0126 08:07:40.315923 4806 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" 
not found Jan 26 08:07:40 crc kubenswrapper[4806]: E0126 08:07:40.316075 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs podName:d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:42.316022899 +0000 UTC m=+841.580430955 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs") pod "openstack-operator-controller-manager-6898c455c-d6bzz" (UID: "d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2") : secret "metrics-server-cert" not found Jan 26 08:07:40 crc kubenswrapper[4806]: E0126 08:07:40.316658 4806 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 08:07:40 crc kubenswrapper[4806]: E0126 08:07:40.316710 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs podName:d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:42.316700208 +0000 UTC m=+841.581108254 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs") pod "openstack-operator-controller-manager-6898c455c-d6bzz" (UID: "d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2") : secret "webhook-server-cert" not found Jan 26 08:07:41 crc kubenswrapper[4806]: E0126 08:07:41.060577 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" podUID="77931dd1-1acc-4552-8605-33a24c74fc43" Jan 26 08:07:41 crc kubenswrapper[4806]: E0126 08:07:41.061141 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" podUID="f63ffecc-85dc-48df-b4d6-675d0792cacf" Jan 26 08:07:41 crc kubenswrapper[4806]: E0126 08:07:41.061775 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" podUID="e048fc14-f2ba-4930-9e77-a281b25c7a07" Jan 26 08:07:41 crc kubenswrapper[4806]: E0126 08:07:41.062674 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" podUID="1a85568e-bc00-4bc5-a99e-bcef2f7041ee" Jan 26 08:07:41 crc kubenswrapper[4806]: E0126 08:07:41.062973 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" podUID="16faebac-962b-4520-bb85-f77bc1d781d1" Jan 26 08:07:41 crc kubenswrapper[4806]: E0126 08:07:41.064189 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" podUID="9096956d-1ed7-4e3c-bdec-d86c14168601" Jan 26 08:07:41 crc kubenswrapper[4806]: I0126 08:07:41.331635 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert\") pod \"infra-operator-controller-manager-694cf4f878-8s72c\" (UID: \"52ffe9cc-7d93-400f-a7ef-81d4c7335024\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:07:41 crc kubenswrapper[4806]: E0126 08:07:41.331965 4806 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 08:07:41 crc kubenswrapper[4806]: E0126 08:07:41.332011 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert podName:52ffe9cc-7d93-400f-a7ef-81d4c7335024 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:45.33199694 +0000 UTC m=+844.596404996 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert") pod "infra-operator-controller-manager-694cf4f878-8s72c" (UID: "52ffe9cc-7d93-400f-a7ef-81d4c7335024") : secret "infra-operator-webhook-server-cert" not found Jan 26 08:07:41 crc kubenswrapper[4806]: E0126 08:07:41.838215 4806 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 08:07:41 crc kubenswrapper[4806]: I0126 08:07:41.838609 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bc45q\" (UID: \"f955001e-4d2d-437c-bc31-19a4234ed701\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:07:41 crc kubenswrapper[4806]: E0126 08:07:41.838752 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert podName:f955001e-4d2d-437c-bc31-19a4234ed701 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:45.838711711 +0000 UTC m=+845.103119857 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" (UID: "f955001e-4d2d-437c-bc31-19a4234ed701") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 08:07:42 crc kubenswrapper[4806]: I0126 08:07:42.346560 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:42 crc kubenswrapper[4806]: E0126 08:07:42.346789 4806 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 08:07:42 crc kubenswrapper[4806]: E0126 08:07:42.347033 4806 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 08:07:42 crc kubenswrapper[4806]: I0126 08:07:42.346948 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:42 crc kubenswrapper[4806]: E0126 08:07:42.347039 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs podName:d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:46.347013685 +0000 UTC m=+845.611421831 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs") pod "openstack-operator-controller-manager-6898c455c-d6bzz" (UID: "d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2") : secret "webhook-server-cert" not found Jan 26 08:07:42 crc kubenswrapper[4806]: E0126 08:07:42.347100 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs podName:d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:46.347084177 +0000 UTC m=+845.611492233 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs") pod "openstack-operator-controller-manager-6898c455c-d6bzz" (UID: "d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2") : secret "metrics-server-cert" not found Jan 26 08:07:45 crc kubenswrapper[4806]: I0126 08:07:45.425204 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert\") pod \"infra-operator-controller-manager-694cf4f878-8s72c\" (UID: \"52ffe9cc-7d93-400f-a7ef-81d4c7335024\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:07:45 crc kubenswrapper[4806]: E0126 08:07:45.425378 4806 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 08:07:45 crc kubenswrapper[4806]: E0126 08:07:45.425447 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert podName:52ffe9cc-7d93-400f-a7ef-81d4c7335024 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:53.425430555 +0000 UTC m=+852.689838611 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert") pod "infra-operator-controller-manager-694cf4f878-8s72c" (UID: "52ffe9cc-7d93-400f-a7ef-81d4c7335024") : secret "infra-operator-webhook-server-cert" not found Jan 26 08:07:45 crc kubenswrapper[4806]: I0126 08:07:45.807438 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:07:45 crc kubenswrapper[4806]: I0126 08:07:45.807873 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:07:45 crc kubenswrapper[4806]: I0126 08:07:45.807965 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:07:45 crc kubenswrapper[4806]: I0126 08:07:45.810028 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1043e4eeb08886878cec455f2ca6376f949985237b4b0930fb8995d1f97399b2"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:07:45 crc kubenswrapper[4806]: I0126 08:07:45.810091 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://1043e4eeb08886878cec455f2ca6376f949985237b4b0930fb8995d1f97399b2" gracePeriod=600 Jan 26 08:07:45 crc kubenswrapper[4806]: I0126 08:07:45.933712 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bc45q\" (UID: \"f955001e-4d2d-437c-bc31-19a4234ed701\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:07:45 crc kubenswrapper[4806]: E0126 08:07:45.933831 4806 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 08:07:45 crc kubenswrapper[4806]: E0126 08:07:45.933897 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert podName:f955001e-4d2d-437c-bc31-19a4234ed701 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:53.933879803 +0000 UTC m=+853.198287859 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" (UID: "f955001e-4d2d-437c-bc31-19a4234ed701") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 08:07:46 crc kubenswrapper[4806]: I0126 08:07:46.095311 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="1043e4eeb08886878cec455f2ca6376f949985237b4b0930fb8995d1f97399b2" exitCode=0 Jan 26 08:07:46 crc kubenswrapper[4806]: I0126 08:07:46.095363 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"1043e4eeb08886878cec455f2ca6376f949985237b4b0930fb8995d1f97399b2"} Jan 26 08:07:46 crc kubenswrapper[4806]: I0126 08:07:46.095400 4806 scope.go:117] "RemoveContainer" containerID="25fe21fbdefc972bf60875548f11358df4e04c7bb242af40b8201587c399a5cc" Jan 26 08:07:46 crc kubenswrapper[4806]: I0126 08:07:46.440045 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:46 crc kubenswrapper[4806]: I0126 08:07:46.440083 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:46 crc kubenswrapper[4806]: E0126 08:07:46.440155 4806 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 08:07:46 crc kubenswrapper[4806]: E0126 08:07:46.440171 4806 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 08:07:46 crc kubenswrapper[4806]: E0126 08:07:46.440214 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs podName:d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2 nodeName:}" failed. 
No retries permitted until 2026-01-26 08:07:54.440196623 +0000 UTC m=+853.704604699 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs") pod "openstack-operator-controller-manager-6898c455c-d6bzz" (UID: "d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2") : secret "webhook-server-cert" not found Jan 26 08:07:46 crc kubenswrapper[4806]: E0126 08:07:46.440231 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs podName:d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2 nodeName:}" failed. No retries permitted until 2026-01-26 08:07:54.440223314 +0000 UTC m=+853.704631380 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs") pod "openstack-operator-controller-manager-6898c455c-d6bzz" (UID: "d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2") : secret "metrics-server-cert" not found Jan 26 08:07:53 crc kubenswrapper[4806]: E0126 08:07:53.199690 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 26 08:07:53 crc kubenswrapper[4806]: E0126 08:07:53.200375 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h6mj4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-9ld8m_openstack-operators(293159cc-40c4-4335-ad77-65f1c493e35a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:07:53 crc kubenswrapper[4806]: E0126 08:07:53.201613 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m" podUID="293159cc-40c4-4335-ad77-65f1c493e35a" Jan 26 08:07:53 crc kubenswrapper[4806]: I0126 08:07:53.445824 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert\") pod \"infra-operator-controller-manager-694cf4f878-8s72c\" (UID: \"52ffe9cc-7d93-400f-a7ef-81d4c7335024\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:07:53 crc kubenswrapper[4806]: E0126 08:07:53.446058 4806 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 08:07:53 crc kubenswrapper[4806]: E0126 08:07:53.446111 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert podName:52ffe9cc-7d93-400f-a7ef-81d4c7335024 nodeName:}" failed. No retries permitted until 2026-01-26 08:08:09.446096706 +0000 UTC m=+868.710504762 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert") pod "infra-operator-controller-manager-694cf4f878-8s72c" (UID: "52ffe9cc-7d93-400f-a7ef-81d4c7335024") : secret "infra-operator-webhook-server-cert" not found Jan 26 08:07:53 crc kubenswrapper[4806]: I0126 08:07:53.952185 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bc45q\" (UID: \"f955001e-4d2d-437c-bc31-19a4234ed701\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:07:53 crc kubenswrapper[4806]: E0126 08:07:53.952384 4806 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 08:07:53 crc kubenswrapper[4806]: E0126 08:07:53.952439 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert podName:f955001e-4d2d-437c-bc31-19a4234ed701 nodeName:}" failed. No retries permitted until 2026-01-26 08:08:09.952423076 +0000 UTC m=+869.216831132 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" (UID: "f955001e-4d2d-437c-bc31-19a4234ed701") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 08:07:54 crc kubenswrapper[4806]: E0126 08:07:54.144493 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m" podUID="293159cc-40c4-4335-ad77-65f1c493e35a" Jan 26 08:07:54 crc kubenswrapper[4806]: I0126 08:07:54.460590 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:54 crc kubenswrapper[4806]: I0126 08:07:54.460639 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:54 crc kubenswrapper[4806]: I0126 08:07:54.473309 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-metrics-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:54 crc kubenswrapper[4806]: I0126 08:07:54.483992 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2-webhook-certs\") pod \"openstack-operator-controller-manager-6898c455c-d6bzz\" (UID: \"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2\") " pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:54 crc kubenswrapper[4806]: I0126 08:07:54.665024 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-pmbn8" Jan 26 08:07:54 crc kubenswrapper[4806]: I0126 08:07:54.673985 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:07:55 crc kubenswrapper[4806]: E0126 08:07:55.964200 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 26 08:07:55 crc kubenswrapper[4806]: E0126 08:07:55.964383 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkrf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
manila-operator-controller-manager-78c6999f6f-497jq_openstack-operators(41df476c-557f-407c-8711-57c979600bea): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:07:55 crc kubenswrapper[4806]: E0126 08:07:55.966328 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq" podUID="41df476c-557f-407c-8711-57c979600bea" Jan 26 08:07:56 crc kubenswrapper[4806]: E0126 08:07:56.163740 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq" podUID="41df476c-557f-407c-8711-57c979600bea" Jan 26 08:07:57 crc kubenswrapper[4806]: E0126 08:07:57.066532 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337" Jan 26 08:07:57 crc kubenswrapper[4806]: E0126 08:07:57.066789 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-48fls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-78fdd796fd-psw9b_openstack-operators(002839d6-a78d-4826-a93c-b6dec9671bab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:07:57 crc kubenswrapper[4806]: E0126 08:07:57.069316 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b" podUID="002839d6-a78d-4826-a93c-b6dec9671bab" Jan 26 08:07:57 crc kubenswrapper[4806]: E0126 08:07:57.163673 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:9caae9b3ee328df678baa26454e45e47693acdadb27f9c635680597aaec43337\\\"\"" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b" podUID="002839d6-a78d-4826-a93c-b6dec9671bab" Jan 26 08:07:58 crc kubenswrapper[4806]: E0126 08:07:58.981460 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" Jan 26 08:07:58 crc kubenswrapper[4806]: E0126 08:07:58.981668 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dlp9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-nk6xc_openstack-operators(f84e4d06-7a1b-4038-b30f-ec7bf90efa2c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:07:58 crc kubenswrapper[4806]: E0126 08:07:58.983049 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc" podUID="f84e4d06-7a1b-4038-b30f-ec7bf90efa2c" Jan 26 08:07:59 crc kubenswrapper[4806]: E0126 08:07:59.175916 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc" podUID="f84e4d06-7a1b-4038-b30f-ec7bf90efa2c" Jan 26 08:07:59 crc kubenswrapper[4806]: E0126 08:07:59.548073 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 26 08:07:59 crc kubenswrapper[4806]: E0126 08:07:59.548257 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-98hll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-rk454_openstack-operators(167c1b32-0550-4c81-a2b6-b30e8d58dd3d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:07:59 crc kubenswrapper[4806]: E0126 08:07:59.549653 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454" podUID="167c1b32-0550-4c81-a2b6-b30e8d58dd3d" Jan 26 08:08:00 crc kubenswrapper[4806]: E0126 08:08:00.179504 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454" podUID="167c1b32-0550-4c81-a2b6-b30e8d58dd3d" Jan 26 08:08:03 crc kubenswrapper[4806]: E0126 08:08:03.069517 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 26 08:08:03 crc kubenswrapper[4806]: E0126 08:08:03.070382 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f2vlt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-ncfjs_openstack-operators(109bb090-2776-45ce-b579-711304ae2db8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:08:03 crc kubenswrapper[4806]: E0126 08:08:03.071611 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs" podUID="109bb090-2776-45ce-b579-711304ae2db8" Jan 26 08:08:03 crc kubenswrapper[4806]: E0126 08:08:03.197480 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs" podUID="109bb090-2776-45ce-b579-711304ae2db8" Jan 26 08:08:04 crc kubenswrapper[4806]: E0126 08:08:04.149393 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 26 08:08:04 crc kubenswrapper[4806]: E0126 08:08:04.149897 4806 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-85bn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-rf496_openstack-operators(1ca63855-2d6d-4543-a084-4cdb7c6d0c5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:08:04 crc kubenswrapper[4806]: E0126 08:08:04.151102 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496" podUID="1ca63855-2d6d-4543-a084-4cdb7c6d0c5c" Jan 26 08:08:04 crc kubenswrapper[4806]: E0126 08:08:04.206427 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496" podUID="1ca63855-2d6d-4543-a084-4cdb7c6d0c5c" Jan 26 08:08:09 crc kubenswrapper[4806]: I0126 08:08:09.499900 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert\") pod \"infra-operator-controller-manager-694cf4f878-8s72c\" (UID: \"52ffe9cc-7d93-400f-a7ef-81d4c7335024\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:08:09 crc kubenswrapper[4806]: I0126 08:08:09.505750 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52ffe9cc-7d93-400f-a7ef-81d4c7335024-cert\") pod \"infra-operator-controller-manager-694cf4f878-8s72c\" (UID: \"52ffe9cc-7d93-400f-a7ef-81d4c7335024\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:08:09 crc kubenswrapper[4806]: I0126 08:08:09.533347 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2p9bw" Jan 26 08:08:09 crc kubenswrapper[4806]: I0126 08:08:09.539182 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:08:09 crc kubenswrapper[4806]: E0126 08:08:09.576497 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 26 08:08:09 crc kubenswrapper[4806]: E0126 08:08:09.576699 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ntdqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-gnhrx_openstack-operators(77931dd1-1acc-4552-8605-33a24c74fc43): ErrImagePull: rpc 
error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:08:09 crc kubenswrapper[4806]: E0126 08:08:09.577869 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" podUID="77931dd1-1acc-4552-8605-33a24c74fc43" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.006628 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bc45q\" (UID: \"f955001e-4d2d-437c-bc31-19a4234ed701\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.016177 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f955001e-4d2d-437c-bc31-19a4234ed701-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854bc45q\" (UID: \"f955001e-4d2d-437c-bc31-19a4234ed701\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.151088 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-hkmr9" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.158147 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.237379 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz"] Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.281331 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757" event={"ID":"00c672af-a00d-45d3-9d80-39de7bbcf49c","Type":"ContainerStarted","Data":"e66c723c6f93b9cae294f6fd645888ff7e62053e947b60072c47b4d8ea800c8d"} Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.281663 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.300302 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c"] Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.303874 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz" event={"ID":"9b0f19e9-5ee8-4f12-a453-2195b20a8f09","Type":"ContainerStarted","Data":"70cf1483c2ca25fd518df472b89ab2b2c62ed20ed0fc26844421a7ffc5a24bef"} Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.304869 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.307678 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz" 
event={"ID":"d36b3dbe-4776-4c55-a64f-4ea15cad6fb7","Type":"ContainerStarted","Data":"0236f608baac36eb2f627fecf83379450790e6a8e9d8de18cb2769cdc4b391b6"} Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.308468 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.312225 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757" podStartSLOduration=7.092701914 podStartE2EDuration="33.312205032s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.574496707 +0000 UTC m=+838.838904763" lastFinishedPulling="2026-01-26 08:08:05.793999795 +0000 UTC m=+865.058407881" observedRunningTime="2026-01-26 08:08:10.302679811 +0000 UTC m=+869.567087877" watchObservedRunningTime="2026-01-26 08:08:10.312205032 +0000 UTC m=+869.576613088" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.315900 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" event={"ID":"9096956d-1ed7-4e3c-bdec-d86c14168601","Type":"ContainerStarted","Data":"b9965ea27c9ba8e2042fc324302aec39e90dd95f1afe9d54165782db42f1bb7c"} Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.316566 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.338358 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"1062ca2b49b34478f04a62458a36769a2e31737989a78160ffd05a185dfcbbaa"} Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.339253 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz" podStartSLOduration=6.296299605 podStartE2EDuration="33.339233943s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:38.750872242 +0000 UTC m=+838.015280298" lastFinishedPulling="2026-01-26 08:08:05.79380655 +0000 UTC m=+865.058214636" observedRunningTime="2026-01-26 08:08:10.3380224 +0000 UTC m=+869.602430446" watchObservedRunningTime="2026-01-26 08:08:10.339233943 +0000 UTC m=+869.603641999" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.356675 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg" event={"ID":"5270b699-329c-41eb-a8cf-5f94eeb4cd11","Type":"ContainerStarted","Data":"45b79d4fe567b2acd333328b9ff922f78b1e1c3c884e043f8f972c38705d42dd"} Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.357058 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.364033 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz" podStartSLOduration=6.701094011 podStartE2EDuration="33.364010273s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.131085243 +0000 UTC m=+838.395493299" 
lastFinishedPulling="2026-01-26 08:08:05.794001505 +0000 UTC m=+865.058409561" observedRunningTime="2026-01-26 08:08:10.359482689 +0000 UTC m=+869.623890745" watchObservedRunningTime="2026-01-26 08:08:10.364010273 +0000 UTC m=+869.628418339" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.371219 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd" event={"ID":"3959df09-4052-4ccb-8c3f-b3f5aebb747c","Type":"ContainerStarted","Data":"cbd44e92063e133c7c31a0ff4932eb91caf06955efe2405884819971df785ffc"} Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.371261 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.384384 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b" event={"ID":"55e84831-2044-4555-844d-93053648d17a","Type":"ContainerStarted","Data":"7d87bc0d2a9f880afe810738bd2a44a60f8791b7c883fe88d09ad05782ebe06b"} Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.384986 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.410540 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd" podStartSLOduration=7.250606716 podStartE2EDuration="33.410510399s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.635583143 +0000 UTC m=+838.899991199" lastFinishedPulling="2026-01-26 08:08:05.795486826 +0000 UTC m=+865.059894882" observedRunningTime="2026-01-26 08:08:10.408768771 +0000 UTC m=+869.673176827" watchObservedRunningTime="2026-01-26 08:08:10.410510399 +0000 UTC m=+869.674918455" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.411688 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" podStartSLOduration=3.418618403 podStartE2EDuration="33.411683781s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.677276137 +0000 UTC m=+838.941684193" lastFinishedPulling="2026-01-26 08:08:09.670341515 +0000 UTC m=+868.934749571" observedRunningTime="2026-01-26 08:08:10.381656527 +0000 UTC m=+869.646064583" watchObservedRunningTime="2026-01-26 08:08:10.411683781 +0000 UTC m=+869.676091837" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.602680 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg" podStartSLOduration=7.329093959 podStartE2EDuration="33.60266361s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.52029312 +0000 UTC m=+838.784701176" lastFinishedPulling="2026-01-26 08:08:05.793862761 +0000 UTC m=+865.058270827" observedRunningTime="2026-01-26 08:08:10.49293579 +0000 UTC m=+869.757343846" watchObservedRunningTime="2026-01-26 08:08:10.60266361 +0000 UTC m=+869.867071666" Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.655831 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b" 
podStartSLOduration=8.673557151 podStartE2EDuration="33.655813588s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.149282332 +0000 UTC m=+838.413690388" lastFinishedPulling="2026-01-26 08:08:04.131538769 +0000 UTC m=+863.395946825" observedRunningTime="2026-01-26 08:08:10.653962438 +0000 UTC m=+869.918370494" watchObservedRunningTime="2026-01-26 08:08:10.655813588 +0000 UTC m=+869.920221644" Jan 26 08:08:10 crc kubenswrapper[4806]: W0126 08:08:10.976422 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf955001e_4d2d_437c_bc31_19a4234ed701.slice/crio-c6366c6eaaf0aa15f2820c7c9e57ccdc267417b91c6185cc888a14b3d6e76f34 WatchSource:0}: Error finding container c6366c6eaaf0aa15f2820c7c9e57ccdc267417b91c6185cc888a14b3d6e76f34: Status 404 returned error can't find the container with id c6366c6eaaf0aa15f2820c7c9e57ccdc267417b91c6185cc888a14b3d6e76f34 Jan 26 08:08:10 crc kubenswrapper[4806]: I0126 08:08:10.984819 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q"] Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.397141 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" event={"ID":"16faebac-962b-4520-bb85-f77bc1d781d1","Type":"ContainerStarted","Data":"d7f9568de88df8fc0c111361e19fc8ebe25d62bbe090d533d88ed1ba8a19a963"} Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.397744 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.398981 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" event={"ID":"f63ffecc-85dc-48df-b4d6-675d0792cacf","Type":"ContainerStarted","Data":"adf1abdfa4d29d50f318b5b1a0f175ac0fc0f92beb39fee32b45e2ed0b29b536"} Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.399185 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.401122 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" event={"ID":"52ffe9cc-7d93-400f-a7ef-81d4c7335024","Type":"ContainerStarted","Data":"a2f986d48bc83bb8784959bc7e8fa306f816a320fd42812fdc41649b70a64585"} Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.401838 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" event={"ID":"f955001e-4d2d-437c-bc31-19a4234ed701","Type":"ContainerStarted","Data":"c6366c6eaaf0aa15f2820c7c9e57ccdc267417b91c6185cc888a14b3d6e76f34"} Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.405881 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" event={"ID":"1a85568e-bc00-4bc5-a99e-bcef2f7041ee","Type":"ContainerStarted","Data":"22145f4b86eaeb2693672516e35c4b1d27966891be1ca1db64df3265640c0075"} Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.406035 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.408812 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq" event={"ID":"3191a58e-ee1d-430f-97ce-c7c532d132a6","Type":"ContainerStarted","Data":"8f1115f741b145cfc6d685348e1cc4e216a5b8014248400123241ca2f4a03a67"} Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.408925 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.410200 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b" event={"ID":"002839d6-a78d-4826-a93c-b6dec9671bab","Type":"ContainerStarted","Data":"60da9a54099aec714129fe1cf3e9971f2088b5cbb38611aafeef1be951e44cfe"} Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.410361 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.417335 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq" event={"ID":"41df476c-557f-407c-8711-57c979600bea","Type":"ContainerStarted","Data":"d49785c90ed38e614831a16eac24e26561ca2602202007571aedf8aa22f8dd9a"} Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.417513 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.420834 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" event={"ID":"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2","Type":"ContainerStarted","Data":"0eb187fc95ddce146f4a8320bea157d709151442adcb80aac462ec4de722d5fe"} Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.420872 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" event={"ID":"d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2","Type":"ContainerStarted","Data":"8c8dddedfc49da0248de992f71a3dd8071fdbedf921309615511beffd009ec54"} Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.421605 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.424638 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" event={"ID":"e048fc14-f2ba-4930-9e77-a281b25c7a07","Type":"ContainerStarted","Data":"2c939a672bb61278f783b1939c7e073355ea2b074365adbe26d5d57f76f9e0b9"} Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.424822 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.426541 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m" 
event={"ID":"293159cc-40c4-4335-ad77-65f1c493e35a","Type":"ContainerStarted","Data":"597b7dd9a74b2f8080254c1af4e1b01b25bfd611f643bba77752dd23dfae4443"} Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.426950 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.522971 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" podStartSLOduration=4.488798232 podStartE2EDuration="34.522953767s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.686608773 +0000 UTC m=+838.951016819" lastFinishedPulling="2026-01-26 08:08:09.720764298 +0000 UTC m=+868.985172354" observedRunningTime="2026-01-26 08:08:11.503323658 +0000 UTC m=+870.767731704" watchObservedRunningTime="2026-01-26 08:08:11.522953767 +0000 UTC m=+870.787361823" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.552467 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" podStartSLOduration=4.565573688 podStartE2EDuration="34.552449996s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.683702293 +0000 UTC m=+838.948110359" lastFinishedPulling="2026-01-26 08:08:09.670578601 +0000 UTC m=+868.934986667" observedRunningTime="2026-01-26 08:08:11.547634384 +0000 UTC m=+870.812042450" watchObservedRunningTime="2026-01-26 08:08:11.552449996 +0000 UTC m=+870.816858052" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.637099 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" podStartSLOduration=4.630091717 podStartE2EDuration="34.637079457s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.663351285 +0000 UTC m=+838.927759341" lastFinishedPulling="2026-01-26 08:08:09.670339015 +0000 UTC m=+868.934747081" observedRunningTime="2026-01-26 08:08:11.628845072 +0000 UTC m=+870.893253148" watchObservedRunningTime="2026-01-26 08:08:11.637079457 +0000 UTC m=+870.901487513" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.687111 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq" podStartSLOduration=4.5195596160000004 podStartE2EDuration="34.6870958s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.595347549 +0000 UTC m=+838.859755605" lastFinishedPulling="2026-01-26 08:08:09.762883733 +0000 UTC m=+869.027291789" observedRunningTime="2026-01-26 08:08:11.682674838 +0000 UTC m=+870.947082894" watchObservedRunningTime="2026-01-26 08:08:11.6870958 +0000 UTC m=+870.951503856" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.740310 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" podStartSLOduration=4.74572375 podStartE2EDuration="34.740281499s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.67631697 +0000 UTC m=+838.940725026" lastFinishedPulling="2026-01-26 08:08:09.670874719 +0000 UTC m=+868.935282775" observedRunningTime="2026-01-26 08:08:11.734435458 +0000 UTC m=+870.998843524" 
watchObservedRunningTime="2026-01-26 08:08:11.740281499 +0000 UTC m=+871.004689555" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.814009 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" podStartSLOduration=33.813995441 podStartE2EDuration="33.813995441s" podCreationTimestamp="2026-01-26 08:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:08:11.81106213 +0000 UTC m=+871.075470176" watchObservedRunningTime="2026-01-26 08:08:11.813995441 +0000 UTC m=+871.078403497" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.886897 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq" podStartSLOduration=4.189146001 podStartE2EDuration="34.88688102s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:38.863565744 +0000 UTC m=+838.127973800" lastFinishedPulling="2026-01-26 08:08:09.561300763 +0000 UTC m=+868.825708819" observedRunningTime="2026-01-26 08:08:11.883894548 +0000 UTC m=+871.148302604" watchObservedRunningTime="2026-01-26 08:08:11.88688102 +0000 UTC m=+871.151289076" Jan 26 08:08:11 crc kubenswrapper[4806]: I0126 08:08:11.938402 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m" podStartSLOduration=4.767283871 podStartE2EDuration="34.938375883s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.573769477 +0000 UTC m=+838.838177523" lastFinishedPulling="2026-01-26 08:08:09.744861469 +0000 UTC m=+869.009269535" observedRunningTime="2026-01-26 08:08:11.935111853 +0000 UTC m=+871.199519909" watchObservedRunningTime="2026-01-26 08:08:11.938375883 +0000 UTC m=+871.202783939" Jan 26 08:08:12 crc kubenswrapper[4806]: I0126 08:08:12.442635 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc" event={"ID":"f84e4d06-7a1b-4038-b30f-ec7bf90efa2c","Type":"ContainerStarted","Data":"b821595b7ea35ca9079646ee0514ad9ad7496532da08022397f8ed98d45f22ae"} Jan 26 08:08:12 crc kubenswrapper[4806]: I0126 08:08:12.476911 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b" podStartSLOduration=4.63746647 podStartE2EDuration="35.476891796s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:38.908792615 +0000 UTC m=+838.173200671" lastFinishedPulling="2026-01-26 08:08:09.748217941 +0000 UTC m=+869.012625997" observedRunningTime="2026-01-26 08:08:11.97983709 +0000 UTC m=+871.244245146" watchObservedRunningTime="2026-01-26 08:08:12.476891796 +0000 UTC m=+871.741299852" Jan 26 08:08:12 crc kubenswrapper[4806]: I0126 08:08:12.478483 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc" podStartSLOduration=3.506944747 podStartE2EDuration="35.47847478s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.534254823 +0000 UTC m=+838.798662879" lastFinishedPulling="2026-01-26 08:08:11.505784856 +0000 UTC m=+870.770192912" observedRunningTime="2026-01-26 08:08:12.474541702 +0000 UTC 
m=+871.738949758" watchObservedRunningTime="2026-01-26 08:08:12.47847478 +0000 UTC m=+871.742882836" Jan 26 08:08:16 crc kubenswrapper[4806]: I0126 08:08:16.476867 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" event={"ID":"52ffe9cc-7d93-400f-a7ef-81d4c7335024","Type":"ContainerStarted","Data":"b5dbc31baff78b7f5705e10958c21fab7d2c987c2fc0e6b4676c72c3271a3673"} Jan 26 08:08:16 crc kubenswrapper[4806]: I0126 08:08:16.478956 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:08:16 crc kubenswrapper[4806]: I0126 08:08:16.480425 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454" event={"ID":"167c1b32-0550-4c81-a2b6-b30e8d58dd3d","Type":"ContainerStarted","Data":"9c7d4b95aa66840ac272a98810d5ed9ba7e3065ace62b7d90591def55b40af5c"} Jan 26 08:08:16 crc kubenswrapper[4806]: I0126 08:08:16.480788 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454" Jan 26 08:08:16 crc kubenswrapper[4806]: I0126 08:08:16.482042 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" event={"ID":"f955001e-4d2d-437c-bc31-19a4234ed701","Type":"ContainerStarted","Data":"d57c31f07409eed2bf05a71cf2fe00d81397ff0ab9c4086a1626402f85ef9a51"} Jan 26 08:08:16 crc kubenswrapper[4806]: I0126 08:08:16.482552 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:08:16 crc kubenswrapper[4806]: I0126 08:08:16.484311 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs" event={"ID":"109bb090-2776-45ce-b579-711304ae2db8","Type":"ContainerStarted","Data":"70c8583347843754f608eb2e44f03e94fe08d3e88158d5c444d462f799527837"} Jan 26 08:08:16 crc kubenswrapper[4806]: I0126 08:08:16.484504 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs" Jan 26 08:08:16 crc kubenswrapper[4806]: I0126 08:08:16.498446 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" podStartSLOduration=34.240293073 podStartE2EDuration="39.498428339s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:08:10.381631547 +0000 UTC m=+869.646039603" lastFinishedPulling="2026-01-26 08:08:15.639766823 +0000 UTC m=+874.904174869" observedRunningTime="2026-01-26 08:08:16.494247194 +0000 UTC m=+875.758655250" watchObservedRunningTime="2026-01-26 08:08:16.498428339 +0000 UTC m=+875.762836395" Jan 26 08:08:16 crc kubenswrapper[4806]: I0126 08:08:16.535906 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" podStartSLOduration=34.873937375 podStartE2EDuration="39.535890866s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:08:10.977906224 +0000 UTC m=+870.242314280" lastFinishedPulling="2026-01-26 08:08:15.639859715 +0000 UTC m=+874.904267771" 
observedRunningTime="2026-01-26 08:08:16.528391801 +0000 UTC m=+875.792799857" watchObservedRunningTime="2026-01-26 08:08:16.535890866 +0000 UTC m=+875.800298922" Jan 26 08:08:16 crc kubenswrapper[4806]: I0126 08:08:16.561379 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs" podStartSLOduration=3.570438259 podStartE2EDuration="39.561364785s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.648308042 +0000 UTC m=+838.912716088" lastFinishedPulling="2026-01-26 08:08:15.639234558 +0000 UTC m=+874.903642614" observedRunningTime="2026-01-26 08:08:16.556126892 +0000 UTC m=+875.820534948" watchObservedRunningTime="2026-01-26 08:08:16.561364785 +0000 UTC m=+875.825772841" Jan 26 08:08:16 crc kubenswrapper[4806]: I0126 08:08:16.579993 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454" podStartSLOduration=3.565453372 podStartE2EDuration="39.579973476s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.625983169 +0000 UTC m=+838.890391225" lastFinishedPulling="2026-01-26 08:08:15.640503273 +0000 UTC m=+874.904911329" observedRunningTime="2026-01-26 08:08:16.577308303 +0000 UTC m=+875.841716359" watchObservedRunningTime="2026-01-26 08:08:16.579973476 +0000 UTC m=+875.844381532" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.295481 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4sstt"] Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.297440 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.308581 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4sstt"] Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.352749 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14284482-c890-4014-ba68-75fbbe78ec04-utilities\") pod \"certified-operators-4sstt\" (UID: \"14284482-c890-4014-ba68-75fbbe78ec04\") " pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.352851 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14284482-c890-4014-ba68-75fbbe78ec04-catalog-content\") pod \"certified-operators-4sstt\" (UID: \"14284482-c890-4014-ba68-75fbbe78ec04\") " pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.352909 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brtmc\" (UniqueName: \"kubernetes.io/projected/14284482-c890-4014-ba68-75fbbe78ec04-kube-api-access-brtmc\") pod \"certified-operators-4sstt\" (UID: \"14284482-c890-4014-ba68-75fbbe78ec04\") " pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.453592 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14284482-c890-4014-ba68-75fbbe78ec04-catalog-content\") pod 
\"certified-operators-4sstt\" (UID: \"14284482-c890-4014-ba68-75fbbe78ec04\") " pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.453656 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brtmc\" (UniqueName: \"kubernetes.io/projected/14284482-c890-4014-ba68-75fbbe78ec04-kube-api-access-brtmc\") pod \"certified-operators-4sstt\" (UID: \"14284482-c890-4014-ba68-75fbbe78ec04\") " pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.453708 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14284482-c890-4014-ba68-75fbbe78ec04-utilities\") pod \"certified-operators-4sstt\" (UID: \"14284482-c890-4014-ba68-75fbbe78ec04\") " pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.454140 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14284482-c890-4014-ba68-75fbbe78ec04-catalog-content\") pod \"certified-operators-4sstt\" (UID: \"14284482-c890-4014-ba68-75fbbe78ec04\") " pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.454228 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14284482-c890-4014-ba68-75fbbe78ec04-utilities\") pod \"certified-operators-4sstt\" (UID: \"14284482-c890-4014-ba68-75fbbe78ec04\") " pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.474249 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brtmc\" (UniqueName: \"kubernetes.io/projected/14284482-c890-4014-ba68-75fbbe78ec04-kube-api-access-brtmc\") pod \"certified-operators-4sstt\" (UID: \"14284482-c890-4014-ba68-75fbbe78ec04\") " pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.492278 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496" event={"ID":"1ca63855-2d6d-4543-a084-4cdb7c6d0c5c","Type":"ContainerStarted","Data":"37b1e963ae0aa2416d198f39646434e7b9b5b25b35a22759156d77891fcf16a7"} Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.509510 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496" podStartSLOduration=3.065742103 podStartE2EDuration="40.509495104s" podCreationTimestamp="2026-01-26 08:07:37 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.582746393 +0000 UTC m=+838.847154449" lastFinishedPulling="2026-01-26 08:08:17.026499394 +0000 UTC m=+876.290907450" observedRunningTime="2026-01-26 08:08:17.508267951 +0000 UTC m=+876.772676007" watchObservedRunningTime="2026-01-26 08:08:17.509495104 +0000 UTC m=+876.773903160" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.550770 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-9mwdz" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.566136 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-wwr7b" Jan 26 08:08:17 crc 
kubenswrapper[4806]: I0126 08:08:17.615782 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.664950 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-psw9b" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.673939 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-q9hmq" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.886200 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-9ld8m" Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.952894 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4sstt"] Jan 26 08:08:17 crc kubenswrapper[4806]: I0126 08:08:17.982273 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6fzqz" Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.042455 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg" Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.082784 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-fd757" Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.092977 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-tj4m9" Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.143294 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496" Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.159882 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc" Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.168130 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-nk6xc" Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.287967 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-497jq" Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.415891 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-ktw6x" Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.460227 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-swpm7" Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.498485 4806 generic.go:334] "Generic (PLEG): container finished" podID="14284482-c890-4014-ba68-75fbbe78ec04" containerID="08e84a651f31baa64f831be674c10d38d91a71343c6ad7bb5ca0b885e76de6a6" exitCode=0 Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.499412 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-4sstt" event={"ID":"14284482-c890-4014-ba68-75fbbe78ec04","Type":"ContainerDied","Data":"08e84a651f31baa64f831be674c10d38d91a71343c6ad7bb5ca0b885e76de6a6"} Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.499438 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sstt" event={"ID":"14284482-c890-4014-ba68-75fbbe78ec04","Type":"ContainerStarted","Data":"c0594ba85e2dd0d40838bf7a8a2de268346717966de238be92a383065afa2695"} Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.574013 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-p4bvd" Jan 26 08:08:18 crc kubenswrapper[4806]: I0126 08:08:18.960660 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-zqkgl" Jan 26 08:08:19 crc kubenswrapper[4806]: I0126 08:08:19.009780 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-bhg2d" Jan 26 08:08:19 crc kubenswrapper[4806]: I0126 08:08:19.512454 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sstt" event={"ID":"14284482-c890-4014-ba68-75fbbe78ec04","Type":"ContainerStarted","Data":"390897f38fcb5761f241610f9661de8165eef9f186e4ac7e0a5e0b0120f436de"} Jan 26 08:08:20 crc kubenswrapper[4806]: I0126 08:08:20.165674 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854bc45q" Jan 26 08:08:20 crc kubenswrapper[4806]: I0126 08:08:20.519958 4806 generic.go:334] "Generic (PLEG): container finished" podID="14284482-c890-4014-ba68-75fbbe78ec04" containerID="390897f38fcb5761f241610f9661de8165eef9f186e4ac7e0a5e0b0120f436de" exitCode=0 Jan 26 08:08:20 crc kubenswrapper[4806]: I0126 08:08:20.520054 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sstt" event={"ID":"14284482-c890-4014-ba68-75fbbe78ec04","Type":"ContainerDied","Data":"390897f38fcb5761f241610f9661de8165eef9f186e4ac7e0a5e0b0120f436de"} Jan 26 08:08:21 crc kubenswrapper[4806]: I0126 08:08:21.533826 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sstt" event={"ID":"14284482-c890-4014-ba68-75fbbe78ec04","Type":"ContainerStarted","Data":"e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3"} Jan 26 08:08:21 crc kubenswrapper[4806]: I0126 08:08:21.559094 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4sstt" podStartSLOduration=1.799930595 podStartE2EDuration="4.559078326s" podCreationTimestamp="2026-01-26 08:08:17 +0000 UTC" firstStartedPulling="2026-01-26 08:08:18.500635984 +0000 UTC m=+877.765044040" lastFinishedPulling="2026-01-26 08:08:21.259783715 +0000 UTC m=+880.524191771" observedRunningTime="2026-01-26 08:08:21.554680745 +0000 UTC m=+880.819088801" watchObservedRunningTime="2026-01-26 08:08:21.559078326 +0000 UTC m=+880.823486382" Jan 26 08:08:22 crc kubenswrapper[4806]: E0126 08:08:22.044500 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" podUID="77931dd1-1acc-4552-8605-33a24c74fc43" Jan 26 08:08:24 crc kubenswrapper[4806]: I0126 08:08:24.683924 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6898c455c-d6bzz" Jan 26 08:08:27 crc kubenswrapper[4806]: I0126 08:08:27.616665 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:27 crc kubenswrapper[4806]: I0126 08:08:27.616716 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:27 crc kubenswrapper[4806]: I0126 08:08:27.681777 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:28 crc kubenswrapper[4806]: I0126 08:08:28.053554 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-rk454" Jan 26 08:08:28 crc kubenswrapper[4806]: I0126 08:08:28.144180 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-rf496" Jan 26 08:08:28 crc kubenswrapper[4806]: I0126 08:08:28.280407 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-ncfjs" Jan 26 08:08:28 crc kubenswrapper[4806]: I0126 08:08:28.649149 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:28 crc kubenswrapper[4806]: I0126 08:08:28.712159 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4sstt"] Jan 26 08:08:29 crc kubenswrapper[4806]: I0126 08:08:29.546190 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-8s72c" Jan 26 08:08:30 crc kubenswrapper[4806]: I0126 08:08:30.604275 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4sstt" podUID="14284482-c890-4014-ba68-75fbbe78ec04" containerName="registry-server" containerID="cri-o://e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3" gracePeriod=2 Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.027243 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.165632 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brtmc\" (UniqueName: \"kubernetes.io/projected/14284482-c890-4014-ba68-75fbbe78ec04-kube-api-access-brtmc\") pod \"14284482-c890-4014-ba68-75fbbe78ec04\" (UID: \"14284482-c890-4014-ba68-75fbbe78ec04\") " Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.165719 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14284482-c890-4014-ba68-75fbbe78ec04-catalog-content\") pod \"14284482-c890-4014-ba68-75fbbe78ec04\" (UID: \"14284482-c890-4014-ba68-75fbbe78ec04\") " Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.165826 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14284482-c890-4014-ba68-75fbbe78ec04-utilities\") pod \"14284482-c890-4014-ba68-75fbbe78ec04\" (UID: \"14284482-c890-4014-ba68-75fbbe78ec04\") " Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.167547 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14284482-c890-4014-ba68-75fbbe78ec04-utilities" (OuterVolumeSpecName: "utilities") pod "14284482-c890-4014-ba68-75fbbe78ec04" (UID: "14284482-c890-4014-ba68-75fbbe78ec04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.175320 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14284482-c890-4014-ba68-75fbbe78ec04-kube-api-access-brtmc" (OuterVolumeSpecName: "kube-api-access-brtmc") pod "14284482-c890-4014-ba68-75fbbe78ec04" (UID: "14284482-c890-4014-ba68-75fbbe78ec04"). InnerVolumeSpecName "kube-api-access-brtmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.233566 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14284482-c890-4014-ba68-75fbbe78ec04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14284482-c890-4014-ba68-75fbbe78ec04" (UID: "14284482-c890-4014-ba68-75fbbe78ec04"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.269470 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14284482-c890-4014-ba68-75fbbe78ec04-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.269717 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brtmc\" (UniqueName: \"kubernetes.io/projected/14284482-c890-4014-ba68-75fbbe78ec04-kube-api-access-brtmc\") on node \"crc\" DevicePath \"\"" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.269788 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14284482-c890-4014-ba68-75fbbe78ec04-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.611373 4806 generic.go:334] "Generic (PLEG): container finished" podID="14284482-c890-4014-ba68-75fbbe78ec04" containerID="e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3" exitCode=0 Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.611407 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sstt" event={"ID":"14284482-c890-4014-ba68-75fbbe78ec04","Type":"ContainerDied","Data":"e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3"} Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.611429 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4sstt" event={"ID":"14284482-c890-4014-ba68-75fbbe78ec04","Type":"ContainerDied","Data":"c0594ba85e2dd0d40838bf7a8a2de268346717966de238be92a383065afa2695"} Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.611447 4806 scope.go:117] "RemoveContainer" containerID="e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.611618 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4sstt" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.632479 4806 scope.go:117] "RemoveContainer" containerID="390897f38fcb5761f241610f9661de8165eef9f186e4ac7e0a5e0b0120f436de" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.642385 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4sstt"] Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.655180 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4sstt"] Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.666012 4806 scope.go:117] "RemoveContainer" containerID="08e84a651f31baa64f831be674c10d38d91a71343c6ad7bb5ca0b885e76de6a6" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.679227 4806 scope.go:117] "RemoveContainer" containerID="e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3" Jan 26 08:08:31 crc kubenswrapper[4806]: E0126 08:08:31.679650 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3\": container with ID starting with e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3 not found: ID does not exist" containerID="e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.679762 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3"} err="failed to get container status \"e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3\": rpc error: code = NotFound desc = could not find container \"e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3\": container with ID starting with e1408da147d6bd1cccf7ab9014fa96236a01e693e04382af795f37f6319caff3 not found: ID does not exist" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.679841 4806 scope.go:117] "RemoveContainer" containerID="390897f38fcb5761f241610f9661de8165eef9f186e4ac7e0a5e0b0120f436de" Jan 26 08:08:31 crc kubenswrapper[4806]: E0126 08:08:31.680137 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"390897f38fcb5761f241610f9661de8165eef9f186e4ac7e0a5e0b0120f436de\": container with ID starting with 390897f38fcb5761f241610f9661de8165eef9f186e4ac7e0a5e0b0120f436de not found: ID does not exist" containerID="390897f38fcb5761f241610f9661de8165eef9f186e4ac7e0a5e0b0120f436de" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.680178 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390897f38fcb5761f241610f9661de8165eef9f186e4ac7e0a5e0b0120f436de"} err="failed to get container status \"390897f38fcb5761f241610f9661de8165eef9f186e4ac7e0a5e0b0120f436de\": rpc error: code = NotFound desc = could not find container \"390897f38fcb5761f241610f9661de8165eef9f186e4ac7e0a5e0b0120f436de\": container with ID starting with 390897f38fcb5761f241610f9661de8165eef9f186e4ac7e0a5e0b0120f436de not found: ID does not exist" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.680205 4806 scope.go:117] "RemoveContainer" containerID="08e84a651f31baa64f831be674c10d38d91a71343c6ad7bb5ca0b885e76de6a6" Jan 26 08:08:31 crc kubenswrapper[4806]: E0126 08:08:31.680435 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"08e84a651f31baa64f831be674c10d38d91a71343c6ad7bb5ca0b885e76de6a6\": container with ID starting with 08e84a651f31baa64f831be674c10d38d91a71343c6ad7bb5ca0b885e76de6a6 not found: ID does not exist" containerID="08e84a651f31baa64f831be674c10d38d91a71343c6ad7bb5ca0b885e76de6a6" Jan 26 08:08:31 crc kubenswrapper[4806]: I0126 08:08:31.680531 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08e84a651f31baa64f831be674c10d38d91a71343c6ad7bb5ca0b885e76de6a6"} err="failed to get container status \"08e84a651f31baa64f831be674c10d38d91a71343c6ad7bb5ca0b885e76de6a6\": rpc error: code = NotFound desc = could not find container \"08e84a651f31baa64f831be674c10d38d91a71343c6ad7bb5ca0b885e76de6a6\": container with ID starting with 08e84a651f31baa64f831be674c10d38d91a71343c6ad7bb5ca0b885e76de6a6 not found: ID does not exist" Jan 26 08:08:33 crc kubenswrapper[4806]: I0126 08:08:33.056448 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14284482-c890-4014-ba68-75fbbe78ec04" path="/var/lib/kubelet/pods/14284482-c890-4014-ba68-75fbbe78ec04/volumes" Jan 26 08:08:36 crc kubenswrapper[4806]: I0126 08:08:36.046447 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 08:08:36 crc kubenswrapper[4806]: I0126 08:08:36.664774 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" event={"ID":"77931dd1-1acc-4552-8605-33a24c74fc43","Type":"ContainerStarted","Data":"279d3af271d5778d9d4ea959441d2964979c41adaa81d1ae4a5a5497ee479fcd"} Jan 26 08:08:43 crc kubenswrapper[4806]: I0126 08:08:43.848047 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-gnhrx" podStartSLOduration=9.085894256 podStartE2EDuration="1m5.848023177s" podCreationTimestamp="2026-01-26 08:07:38 +0000 UTC" firstStartedPulling="2026-01-26 08:07:39.679062816 +0000 UTC m=+838.943470872" lastFinishedPulling="2026-01-26 08:08:36.441191737 +0000 UTC m=+895.705599793" observedRunningTime="2026-01-26 08:08:36.683675469 +0000 UTC m=+895.948083535" watchObservedRunningTime="2026-01-26 08:08:43.848023177 +0000 UTC m=+903.112431243" Jan 26 08:08:43 crc kubenswrapper[4806]: I0126 08:08:43.850540 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wvpl9"] Jan 26 08:08:43 crc kubenswrapper[4806]: E0126 08:08:43.850940 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14284482-c890-4014-ba68-75fbbe78ec04" containerName="extract-utilities" Jan 26 08:08:43 crc kubenswrapper[4806]: I0126 08:08:43.850964 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="14284482-c890-4014-ba68-75fbbe78ec04" containerName="extract-utilities" Jan 26 08:08:43 crc kubenswrapper[4806]: E0126 08:08:43.850984 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14284482-c890-4014-ba68-75fbbe78ec04" containerName="extract-content" Jan 26 08:08:43 crc kubenswrapper[4806]: I0126 08:08:43.850996 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="14284482-c890-4014-ba68-75fbbe78ec04" containerName="extract-content" Jan 26 08:08:43 crc kubenswrapper[4806]: E0126 08:08:43.851021 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14284482-c890-4014-ba68-75fbbe78ec04" containerName="registry-server" Jan 26 08:08:43 crc 
kubenswrapper[4806]: I0126 08:08:43.851031 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="14284482-c890-4014-ba68-75fbbe78ec04" containerName="registry-server" Jan 26 08:08:43 crc kubenswrapper[4806]: I0126 08:08:43.851232 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="14284482-c890-4014-ba68-75fbbe78ec04" containerName="registry-server" Jan 26 08:08:43 crc kubenswrapper[4806]: I0126 08:08:43.852738 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:43 crc kubenswrapper[4806]: I0126 08:08:43.870312 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvpl9"] Jan 26 08:08:43 crc kubenswrapper[4806]: I0126 08:08:43.988138 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-utilities\") pod \"community-operators-wvpl9\" (UID: \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\") " pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:43 crc kubenswrapper[4806]: I0126 08:08:43.988233 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwjdr\" (UniqueName: \"kubernetes.io/projected/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-kube-api-access-gwjdr\") pod \"community-operators-wvpl9\" (UID: \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\") " pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:43 crc kubenswrapper[4806]: I0126 08:08:43.988254 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-catalog-content\") pod \"community-operators-wvpl9\" (UID: \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\") " pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:44 crc kubenswrapper[4806]: I0126 08:08:44.089569 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwjdr\" (UniqueName: \"kubernetes.io/projected/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-kube-api-access-gwjdr\") pod \"community-operators-wvpl9\" (UID: \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\") " pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:44 crc kubenswrapper[4806]: I0126 08:08:44.089868 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-catalog-content\") pod \"community-operators-wvpl9\" (UID: \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\") " pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:44 crc kubenswrapper[4806]: I0126 08:08:44.089968 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-utilities\") pod \"community-operators-wvpl9\" (UID: \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\") " pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:44 crc kubenswrapper[4806]: I0126 08:08:44.090478 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-catalog-content\") pod \"community-operators-wvpl9\" (UID: \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\") " 
pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:44 crc kubenswrapper[4806]: I0126 08:08:44.090555 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-utilities\") pod \"community-operators-wvpl9\" (UID: \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\") " pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:44 crc kubenswrapper[4806]: I0126 08:08:44.116437 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwjdr\" (UniqueName: \"kubernetes.io/projected/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-kube-api-access-gwjdr\") pod \"community-operators-wvpl9\" (UID: \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\") " pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:44 crc kubenswrapper[4806]: I0126 08:08:44.178906 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:44 crc kubenswrapper[4806]: I0126 08:08:44.487202 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wvpl9"] Jan 26 08:08:44 crc kubenswrapper[4806]: I0126 08:08:44.722736 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvpl9" event={"ID":"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0","Type":"ContainerStarted","Data":"53da387e11b842a90324c60fd8063589ee66cc97069ddd12aeb4df9ef7863b49"} Jan 26 08:08:45 crc kubenswrapper[4806]: I0126 08:08:45.730049 4806 generic.go:334] "Generic (PLEG): container finished" podID="ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" containerID="631bb750f529839b7acb0d5007af588da1bd9c753499adc1345974ee243c0bd9" exitCode=0 Jan 26 08:08:45 crc kubenswrapper[4806]: I0126 08:08:45.730120 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvpl9" event={"ID":"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0","Type":"ContainerDied","Data":"631bb750f529839b7acb0d5007af588da1bd9c753499adc1345974ee243c0bd9"} Jan 26 08:08:46 crc kubenswrapper[4806]: I0126 08:08:46.738872 4806 generic.go:334] "Generic (PLEG): container finished" podID="ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" containerID="a2aef2617b8d4edb8f91713934d0f236d6fce90c017f2b224edf1060dc318579" exitCode=0 Jan 26 08:08:46 crc kubenswrapper[4806]: I0126 08:08:46.739306 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvpl9" event={"ID":"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0","Type":"ContainerDied","Data":"a2aef2617b8d4edb8f91713934d0f236d6fce90c017f2b224edf1060dc318579"} Jan 26 08:08:47 crc kubenswrapper[4806]: I0126 08:08:47.746187 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvpl9" event={"ID":"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0","Type":"ContainerStarted","Data":"c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6"} Jan 26 08:08:47 crc kubenswrapper[4806]: I0126 08:08:47.768597 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wvpl9" podStartSLOduration=3.375481573 podStartE2EDuration="4.76857548s" podCreationTimestamp="2026-01-26 08:08:43 +0000 UTC" firstStartedPulling="2026-01-26 08:08:45.732064692 +0000 UTC m=+904.996472748" lastFinishedPulling="2026-01-26 08:08:47.125158609 +0000 UTC m=+906.389566655" observedRunningTime="2026-01-26 08:08:47.762034201 +0000 UTC m=+907.026442277" 
watchObservedRunningTime="2026-01-26 08:08:47.76857548 +0000 UTC m=+907.032983536" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.229029 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8lxk9"] Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.230665 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.235683 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.235750 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-gn2wr" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.235927 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.236101 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.264606 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8lxk9"] Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.300495 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6kr5x"] Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.301645 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.306261 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.323113 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6kr5x"] Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.422134 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc2zv\" (UniqueName: \"kubernetes.io/projected/8712dfde-3740-4f37-85c2-bc532a559a48-kube-api-access-zc2zv\") pod \"dnsmasq-dns-675f4bcbfc-8lxk9\" (UID: \"8712dfde-3740-4f37-85c2-bc532a559a48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.422203 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xrpx\" (UniqueName: \"kubernetes.io/projected/fcd872ae-4b9c-4f34-8b78-aac4c0602746-kube-api-access-5xrpx\") pod \"dnsmasq-dns-78dd6ddcc-6kr5x\" (UID: \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.422257 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcd872ae-4b9c-4f34-8b78-aac4c0602746-config\") pod \"dnsmasq-dns-78dd6ddcc-6kr5x\" (UID: \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.422274 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8712dfde-3740-4f37-85c2-bc532a559a48-config\") pod \"dnsmasq-dns-675f4bcbfc-8lxk9\" (UID: \"8712dfde-3740-4f37-85c2-bc532a559a48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" Jan 26 08:08:52 
crc kubenswrapper[4806]: I0126 08:08:52.422301 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcd872ae-4b9c-4f34-8b78-aac4c0602746-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6kr5x\" (UID: \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.525691 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcd872ae-4b9c-4f34-8b78-aac4c0602746-config\") pod \"dnsmasq-dns-78dd6ddcc-6kr5x\" (UID: \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.525784 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8712dfde-3740-4f37-85c2-bc532a559a48-config\") pod \"dnsmasq-dns-675f4bcbfc-8lxk9\" (UID: \"8712dfde-3740-4f37-85c2-bc532a559a48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.525873 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcd872ae-4b9c-4f34-8b78-aac4c0602746-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6kr5x\" (UID: \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.525961 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc2zv\" (UniqueName: \"kubernetes.io/projected/8712dfde-3740-4f37-85c2-bc532a559a48-kube-api-access-zc2zv\") pod \"dnsmasq-dns-675f4bcbfc-8lxk9\" (UID: \"8712dfde-3740-4f37-85c2-bc532a559a48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.526033 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xrpx\" (UniqueName: \"kubernetes.io/projected/fcd872ae-4b9c-4f34-8b78-aac4c0602746-kube-api-access-5xrpx\") pod \"dnsmasq-dns-78dd6ddcc-6kr5x\" (UID: \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.526332 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcd872ae-4b9c-4f34-8b78-aac4c0602746-config\") pod \"dnsmasq-dns-78dd6ddcc-6kr5x\" (UID: \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.526725 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcd872ae-4b9c-4f34-8b78-aac4c0602746-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6kr5x\" (UID: \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.527802 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8712dfde-3740-4f37-85c2-bc532a559a48-config\") pod \"dnsmasq-dns-675f4bcbfc-8lxk9\" (UID: \"8712dfde-3740-4f37-85c2-bc532a559a48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.550037 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xrpx\" 
(UniqueName: \"kubernetes.io/projected/fcd872ae-4b9c-4f34-8b78-aac4c0602746-kube-api-access-5xrpx\") pod \"dnsmasq-dns-78dd6ddcc-6kr5x\" (UID: \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.550206 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc2zv\" (UniqueName: \"kubernetes.io/projected/8712dfde-3740-4f37-85c2-bc532a559a48-kube-api-access-zc2zv\") pod \"dnsmasq-dns-675f4bcbfc-8lxk9\" (UID: \"8712dfde-3740-4f37-85c2-bc532a559a48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.559649 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" Jan 26 08:08:52 crc kubenswrapper[4806]: I0126 08:08:52.635628 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:08:53 crc kubenswrapper[4806]: I0126 08:08:53.073162 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8lxk9"] Jan 26 08:08:53 crc kubenswrapper[4806]: I0126 08:08:53.124631 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6kr5x"] Jan 26 08:08:53 crc kubenswrapper[4806]: W0126 08:08:53.135474 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcd872ae_4b9c_4f34_8b78_aac4c0602746.slice/crio-fe10e18eeb833b27470be15c662f928ea3ad9824baa1bd225705acb9bea8225e WatchSource:0}: Error finding container fe10e18eeb833b27470be15c662f928ea3ad9824baa1bd225705acb9bea8225e: Status 404 returned error can't find the container with id fe10e18eeb833b27470be15c662f928ea3ad9824baa1bd225705acb9bea8225e Jan 26 08:08:53 crc kubenswrapper[4806]: I0126 08:08:53.785209 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" event={"ID":"fcd872ae-4b9c-4f34-8b78-aac4c0602746","Type":"ContainerStarted","Data":"fe10e18eeb833b27470be15c662f928ea3ad9824baa1bd225705acb9bea8225e"} Jan 26 08:08:53 crc kubenswrapper[4806]: I0126 08:08:53.788214 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" event={"ID":"8712dfde-3740-4f37-85c2-bc532a559a48","Type":"ContainerStarted","Data":"f49269112e729b52bd7d2e6460c8ae3e41d0af3e10827266557d321dd094fcb1"} Jan 26 08:08:54 crc kubenswrapper[4806]: I0126 08:08:54.180070 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:54 crc kubenswrapper[4806]: I0126 08:08:54.180905 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:54 crc kubenswrapper[4806]: I0126 08:08:54.343743 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:54 crc kubenswrapper[4806]: I0126 08:08:54.941164 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.080141 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvpl9"] Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.153927 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-8lxk9"] Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.190705 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nc6vl"] Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.191823 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.220968 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nc6vl"] Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.265922 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89k5s\" (UniqueName: \"kubernetes.io/projected/01b8727d-453c-4e87-aaa9-e938db5d17dc-kube-api-access-89k5s\") pod \"dnsmasq-dns-666b6646f7-nc6vl\" (UID: \"01b8727d-453c-4e87-aaa9-e938db5d17dc\") " pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.266336 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01b8727d-453c-4e87-aaa9-e938db5d17dc-config\") pod \"dnsmasq-dns-666b6646f7-nc6vl\" (UID: \"01b8727d-453c-4e87-aaa9-e938db5d17dc\") " pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.266399 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01b8727d-453c-4e87-aaa9-e938db5d17dc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nc6vl\" (UID: \"01b8727d-453c-4e87-aaa9-e938db5d17dc\") " pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.367476 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89k5s\" (UniqueName: \"kubernetes.io/projected/01b8727d-453c-4e87-aaa9-e938db5d17dc-kube-api-access-89k5s\") pod \"dnsmasq-dns-666b6646f7-nc6vl\" (UID: \"01b8727d-453c-4e87-aaa9-e938db5d17dc\") " pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.367556 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01b8727d-453c-4e87-aaa9-e938db5d17dc-config\") pod \"dnsmasq-dns-666b6646f7-nc6vl\" (UID: \"01b8727d-453c-4e87-aaa9-e938db5d17dc\") " pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.367580 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01b8727d-453c-4e87-aaa9-e938db5d17dc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nc6vl\" (UID: \"01b8727d-453c-4e87-aaa9-e938db5d17dc\") " pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.368485 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01b8727d-453c-4e87-aaa9-e938db5d17dc-dns-svc\") pod \"dnsmasq-dns-666b6646f7-nc6vl\" (UID: \"01b8727d-453c-4e87-aaa9-e938db5d17dc\") " pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.445025 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01b8727d-453c-4e87-aaa9-e938db5d17dc-config\") pod \"dnsmasq-dns-666b6646f7-nc6vl\" 
(UID: \"01b8727d-453c-4e87-aaa9-e938db5d17dc\") " pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.464753 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89k5s\" (UniqueName: \"kubernetes.io/projected/01b8727d-453c-4e87-aaa9-e938db5d17dc-kube-api-access-89k5s\") pod \"dnsmasq-dns-666b6646f7-nc6vl\" (UID: \"01b8727d-453c-4e87-aaa9-e938db5d17dc\") " pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.527960 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.567879 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6kr5x"] Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.608102 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z28t5"] Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.609548 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.628545 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z28t5"] Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.670214 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-z28t5\" (UID: \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.670307 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42z7w\" (UniqueName: \"kubernetes.io/projected/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-kube-api-access-42z7w\") pod \"dnsmasq-dns-57d769cc4f-z28t5\" (UID: \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.670360 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-config\") pod \"dnsmasq-dns-57d769cc4f-z28t5\" (UID: \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.772376 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42z7w\" (UniqueName: \"kubernetes.io/projected/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-kube-api-access-42z7w\") pod \"dnsmasq-dns-57d769cc4f-z28t5\" (UID: \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.772676 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-config\") pod \"dnsmasq-dns-57d769cc4f-z28t5\" (UID: \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.772709 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-z28t5\" (UID: \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.773565 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-z28t5\" (UID: \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.773619 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-config\") pod \"dnsmasq-dns-57d769cc4f-z28t5\" (UID: \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.809096 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42z7w\" (UniqueName: \"kubernetes.io/projected/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-kube-api-access-42z7w\") pod \"dnsmasq-dns-57d769cc4f-z28t5\" (UID: \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\") " pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:08:55 crc kubenswrapper[4806]: I0126 08:08:55.982723 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.184883 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nc6vl"] Jan 26 08:08:56 crc kubenswrapper[4806]: W0126 08:08:56.213033 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01b8727d_453c_4e87_aaa9_e938db5d17dc.slice/crio-163dad23fb81989fe0e451d00ea09dfa77218cacba380bed301aff1277e6275b WatchSource:0}: Error finding container 163dad23fb81989fe0e451d00ea09dfa77218cacba380bed301aff1277e6275b: Status 404 returned error can't find the container with id 163dad23fb81989fe0e451d00ea09dfa77218cacba380bed301aff1277e6275b Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.479376 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.481940 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: W0126 08:08:56.485099 4806 reflector.go:561] object-"openstack"/"rabbitmq-plugins-conf": failed to list *v1.ConfigMap: configmaps "rabbitmq-plugins-conf" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 08:08:56 crc kubenswrapper[4806]: E0126 08:08:56.485135 4806 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"rabbitmq-plugins-conf\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 08:08:56 crc kubenswrapper[4806]: W0126 08:08:56.485219 4806 reflector.go:561] object-"openstack"/"rabbitmq-erlang-cookie": failed to list *v1.Secret: secrets "rabbitmq-erlang-cookie" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 08:08:56 crc kubenswrapper[4806]: E0126 08:08:56.485233 4806 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"rabbitmq-erlang-cookie\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 08:08:56 crc kubenswrapper[4806]: W0126 08:08:56.485279 4806 reflector.go:561] object-"openstack"/"rabbitmq-config-data": failed to list *v1.ConfigMap: configmaps "rabbitmq-config-data" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 08:08:56 crc kubenswrapper[4806]: E0126 08:08:56.485289 4806 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"rabbitmq-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 08:08:56 crc kubenswrapper[4806]: W0126 08:08:56.485321 4806 reflector.go:561] object-"openstack"/"rabbitmq-default-user": failed to list *v1.Secret: secrets "rabbitmq-default-user" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 08:08:56 crc kubenswrapper[4806]: E0126 08:08:56.485331 4806 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"rabbitmq-default-user\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 08:08:56 crc kubenswrapper[4806]: W0126 08:08:56.485506 4806 reflector.go:561] object-"openstack"/"rabbitmq-server-conf": failed to list *v1.ConfigMap: configmaps "rabbitmq-server-conf" is forbidden: User 
"system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 08:08:56 crc kubenswrapper[4806]: E0126 08:08:56.485548 4806 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"rabbitmq-server-conf\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 08:08:56 crc kubenswrapper[4806]: W0126 08:08:56.486133 4806 reflector.go:561] object-"openstack"/"rabbitmq-server-dockercfg-czc68": failed to list *v1.Secret: secrets "rabbitmq-server-dockercfg-czc68" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 08:08:56 crc kubenswrapper[4806]: E0126 08:08:56.486184 4806 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-server-dockercfg-czc68\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"rabbitmq-server-dockercfg-czc68\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 08:08:56 crc kubenswrapper[4806]: W0126 08:08:56.489670 4806 reflector.go:561] object-"openstack"/"cert-rabbitmq-svc": failed to list *v1.Secret: secrets "cert-rabbitmq-svc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 08:08:56 crc kubenswrapper[4806]: E0126 08:08:56.489701 4806 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-rabbitmq-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cert-rabbitmq-svc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.512513 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 08:08:56 crc kubenswrapper[4806]: W0126 08:08:56.622072 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf67838f_5d5e_48a8_ba5a_ed0c64d2756f.slice/crio-fc7b5a287a722aa632011a18f078f8b31df229abc0b5e67da12cdfc8eea89399 WatchSource:0}: Error finding container fc7b5a287a722aa632011a18f078f8b31df229abc0b5e67da12cdfc8eea89399: Status 404 returned error can't find the container with id fc7b5a287a722aa632011a18f078f8b31df229abc0b5e67da12cdfc8eea89399 Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.623653 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.623822 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.623907 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkj6w\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-kube-api-access-xkj6w\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.624002 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.624107 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-config-data\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.624201 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.624289 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.624361 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.624430 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.624542 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.624637 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-erlang-cookie-secret\") pod 
\"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.625814 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z28t5"] Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.731930 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-config-data\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.732171 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.732207 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.732231 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.732246 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.732280 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.732296 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.732335 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.732354 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc 
kubenswrapper[4806]: I0126 08:08:56.732379 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkj6w\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-kube-api-access-xkj6w\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.732432 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.732880 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.733087 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.733236 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.752413 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.753308 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkj6w\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-kube-api-access-xkj6w\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.757978 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.759255 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.764021 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.764308 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.764480 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.764741 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.764873 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-77bn9" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.764908 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.765036 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.767394 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.792244 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.832465 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" event={"ID":"01b8727d-453c-4e87-aaa9-e938db5d17dc","Type":"ContainerStarted","Data":"163dad23fb81989fe0e451d00ea09dfa77218cacba380bed301aff1277e6275b"} Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.833119 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.833187 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/025ae3ca-3082-4bc8-8611-5b23cec63932-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.833216 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/025ae3ca-3082-4bc8-8611-5b23cec63932-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.833283 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.833302 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.833351 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr76k\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-kube-api-access-gr76k\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.833395 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.833415 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.833439 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.833468 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.833487 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.835905 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wvpl9" podUID="ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" containerName="registry-server" containerID="cri-o://c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6" gracePeriod=2 Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.836182 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" 
event={"ID":"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f","Type":"ContainerStarted","Data":"fc7b5a287a722aa632011a18f078f8b31df229abc0b5e67da12cdfc8eea89399"} Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.939402 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.939476 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.939569 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr76k\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-kube-api-access-gr76k\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.939635 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.939652 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.939697 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.939737 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.939777 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.939838 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" 
Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.939873 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/025ae3ca-3082-4bc8-8611-5b23cec63932-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.939917 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/025ae3ca-3082-4bc8-8611-5b23cec63932-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.940377 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.941214 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.942986 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.943299 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.943387 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.944799 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.947979 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.963611 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.964273 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr76k\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-kube-api-access-gr76k\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.971011 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.977777 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/025ae3ca-3082-4bc8-8611-5b23cec63932-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:56 crc kubenswrapper[4806]: I0126 08:08:56.983672 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/025ae3ca-3082-4bc8-8611-5b23cec63932-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.155010 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.295400 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.305205 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.343419 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.433309 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.444367 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.461009 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwjdr\" (UniqueName: \"kubernetes.io/projected/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-kube-api-access-gwjdr\") pod \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\" (UID: \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\") " Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.461073 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-catalog-content\") pod \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\" (UID: \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\") " Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.461111 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-utilities\") pod \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\" (UID: \"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0\") " Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.462210 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-utilities" (OuterVolumeSpecName: "utilities") pod "ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" (UID: "ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.473069 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-kube-api-access-gwjdr" (OuterVolumeSpecName: "kube-api-access-gwjdr") pod "ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" (UID: "ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0"). InnerVolumeSpecName "kube-api-access-gwjdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.527931 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.539932 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.570118 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwjdr\" (UniqueName: \"kubernetes.io/projected/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-kube-api-access-gwjdr\") on node \"crc\" DevicePath \"\"" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.570157 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.584835 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-czc68" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.594093 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" (UID: "ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.616269 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 08:08:57 crc kubenswrapper[4806]: E0126 08:08:57.616574 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" containerName="extract-utilities" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.616585 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" containerName="extract-utilities" Jan 26 08:08:57 crc kubenswrapper[4806]: E0126 08:08:57.616604 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" containerName="extract-content" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.616610 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" containerName="extract-content" Jan 26 08:08:57 crc kubenswrapper[4806]: E0126 08:08:57.616626 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" containerName="registry-server" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.616632 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" containerName="registry-server" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.616767 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" containerName="registry-server" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.617505 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.624084 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-6mdlc" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.624256 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.624393 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.624509 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.651058 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.653955 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.675023 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.675878 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70aa246b-31a1-4800-b76e-d50a2002a5f8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.675929 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/70aa246b-31a1-4800-b76e-d50a2002a5f8-config-data-default\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.676005 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.676030 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/70aa246b-31a1-4800-b76e-d50a2002a5f8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.676050 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70aa246b-31a1-4800-b76e-d50a2002a5f8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.676087 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlfjh\" (UniqueName: \"kubernetes.io/projected/70aa246b-31a1-4800-b76e-d50a2002a5f8-kube-api-access-dlfjh\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " 
pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.676104 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/70aa246b-31a1-4800-b76e-d50a2002a5f8-kolla-config\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.676131 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/70aa246b-31a1-4800-b76e-d50a2002a5f8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.676180 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.683873 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-config-data\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.694902 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.704857 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.776630 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70aa246b-31a1-4800-b76e-d50a2002a5f8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.776672 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/70aa246b-31a1-4800-b76e-d50a2002a5f8-config-data-default\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.776732 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.776750 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/70aa246b-31a1-4800-b76e-d50a2002a5f8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.776769 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70aa246b-31a1-4800-b76e-d50a2002a5f8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.776802 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlfjh\" (UniqueName: \"kubernetes.io/projected/70aa246b-31a1-4800-b76e-d50a2002a5f8-kube-api-access-dlfjh\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.776819 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/70aa246b-31a1-4800-b76e-d50a2002a5f8-kolla-config\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.776840 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/70aa246b-31a1-4800-b76e-d50a2002a5f8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.777174 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/70aa246b-31a1-4800-b76e-d50a2002a5f8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.778377 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70aa246b-31a1-4800-b76e-d50a2002a5f8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.778987 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/70aa246b-31a1-4800-b76e-d50a2002a5f8-config-data-default\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.779351 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.780334 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/70aa246b-31a1-4800-b76e-d50a2002a5f8-kolla-config\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.816205 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.816386 4806 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-default-user" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.822405 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/70aa246b-31a1-4800-b76e-d50a2002a5f8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.822692 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlfjh\" (UniqueName: \"kubernetes.io/projected/70aa246b-31a1-4800-b76e-d50a2002a5f8-kube-api-access-dlfjh\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: W0126 08:08:57.825644 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod025ae3ca_3082_4bc8_8611_5b23cec63932.slice/crio-261fde538c17c5c59f604f1bee431cd40a02e04aeb03e67f7f3577e90392d908 WatchSource:0}: Error finding container 261fde538c17c5c59f604f1bee431cd40a02e04aeb03e67f7f3577e90392d908: Status 404 returned error can't find the container with id 261fde538c17c5c59f604f1bee431cd40a02e04aeb03e67f7f3577e90392d908 Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.829118 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70aa246b-31a1-4800-b76e-d50a2002a5f8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.830455 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " pod="openstack/rabbitmq-server-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.850393 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-galera-0\" (UID: \"70aa246b-31a1-4800-b76e-d50a2002a5f8\") " pod="openstack/openstack-galera-0" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.870457 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"025ae3ca-3082-4bc8-8611-5b23cec63932","Type":"ContainerStarted","Data":"261fde538c17c5c59f604f1bee431cd40a02e04aeb03e67f7f3577e90392d908"} Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.886844 4806 generic.go:334] "Generic (PLEG): container finished" podID="ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" containerID="c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6" exitCode=0 Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.886898 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvpl9" event={"ID":"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0","Type":"ContainerDied","Data":"c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6"} Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.886917 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wvpl9" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.886931 4806 scope.go:117] "RemoveContainer" containerID="c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6" Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.886921 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wvpl9" event={"ID":"ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0","Type":"ContainerDied","Data":"53da387e11b842a90324c60fd8063589ee66cc97069ddd12aeb4df9ef7863b49"} Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.962174 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wvpl9"] Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.967564 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wvpl9"] Jan 26 08:08:57 crc kubenswrapper[4806]: I0126 08:08:57.975468 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 08:08:58 crc kubenswrapper[4806]: I0126 08:08:58.012358 4806 scope.go:117] "RemoveContainer" containerID="a2aef2617b8d4edb8f91713934d0f236d6fce90c017f2b224edf1060dc318579" Jan 26 08:08:58 crc kubenswrapper[4806]: I0126 08:08:58.023027 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 08:08:58 crc kubenswrapper[4806]: I0126 08:08:58.107215 4806 scope.go:117] "RemoveContainer" containerID="631bb750f529839b7acb0d5007af588da1bd9c753499adc1345974ee243c0bd9" Jan 26 08:08:58 crc kubenswrapper[4806]: I0126 08:08:58.201635 4806 scope.go:117] "RemoveContainer" containerID="c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6" Jan 26 08:08:58 crc kubenswrapper[4806]: E0126 08:08:58.215186 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6\": container with ID starting with c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6 not found: ID does not exist" containerID="c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6" Jan 26 08:08:58 crc kubenswrapper[4806]: I0126 08:08:58.215228 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6"} err="failed to get container status \"c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6\": rpc error: code = NotFound desc = could not find container \"c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6\": container with ID starting with c6abf3066d343d221fda645b0d86a0dfe573d113d6246d3dfb1bec303c9e73f6 not found: ID does not exist" Jan 26 08:08:58 crc kubenswrapper[4806]: I0126 08:08:58.215253 4806 scope.go:117] "RemoveContainer" containerID="a2aef2617b8d4edb8f91713934d0f236d6fce90c017f2b224edf1060dc318579" Jan 26 08:08:58 crc kubenswrapper[4806]: E0126 08:08:58.225145 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2aef2617b8d4edb8f91713934d0f236d6fce90c017f2b224edf1060dc318579\": container with ID starting with a2aef2617b8d4edb8f91713934d0f236d6fce90c017f2b224edf1060dc318579 not found: ID does not exist" containerID="a2aef2617b8d4edb8f91713934d0f236d6fce90c017f2b224edf1060dc318579" Jan 26 08:08:58 crc kubenswrapper[4806]: 
I0126 08:08:58.225183 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2aef2617b8d4edb8f91713934d0f236d6fce90c017f2b224edf1060dc318579"} err="failed to get container status \"a2aef2617b8d4edb8f91713934d0f236d6fce90c017f2b224edf1060dc318579\": rpc error: code = NotFound desc = could not find container \"a2aef2617b8d4edb8f91713934d0f236d6fce90c017f2b224edf1060dc318579\": container with ID starting with a2aef2617b8d4edb8f91713934d0f236d6fce90c017f2b224edf1060dc318579 not found: ID does not exist" Jan 26 08:08:58 crc kubenswrapper[4806]: I0126 08:08:58.225230 4806 scope.go:117] "RemoveContainer" containerID="631bb750f529839b7acb0d5007af588da1bd9c753499adc1345974ee243c0bd9" Jan 26 08:08:58 crc kubenswrapper[4806]: E0126 08:08:58.225790 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"631bb750f529839b7acb0d5007af588da1bd9c753499adc1345974ee243c0bd9\": container with ID starting with 631bb750f529839b7acb0d5007af588da1bd9c753499adc1345974ee243c0bd9 not found: ID does not exist" containerID="631bb750f529839b7acb0d5007af588da1bd9c753499adc1345974ee243c0bd9" Jan 26 08:08:58 crc kubenswrapper[4806]: I0126 08:08:58.225813 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"631bb750f529839b7acb0d5007af588da1bd9c753499adc1345974ee243c0bd9"} err="failed to get container status \"631bb750f529839b7acb0d5007af588da1bd9c753499adc1345974ee243c0bd9\": rpc error: code = NotFound desc = could not find container \"631bb750f529839b7acb0d5007af588da1bd9c753499adc1345974ee243c0bd9\": container with ID starting with 631bb750f529839b7acb0d5007af588da1bd9c753499adc1345974ee243c0bd9 not found: ID does not exist" Jan 26 08:08:58 crc kubenswrapper[4806]: I0126 08:08:58.651455 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 08:08:58 crc kubenswrapper[4806]: I0126 08:08:58.805281 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 08:08:58 crc kubenswrapper[4806]: W0126 08:08:58.829612 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70aa246b_31a1_4800_b76e_d50a2002a5f8.slice/crio-820c5ddf7878fb67663da86da83a59b0e0340d550924f45b6e4a365b10da1113 WatchSource:0}: Error finding container 820c5ddf7878fb67663da86da83a59b0e0340d550924f45b6e4a365b10da1113: Status 404 returned error can't find the container with id 820c5ddf7878fb67663da86da83a59b0e0340d550924f45b6e4a365b10da1113 Jan 26 08:08:58 crc kubenswrapper[4806]: I0126 08:08:58.911087 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35","Type":"ContainerStarted","Data":"b539fddec11123e110e6cdb1ffeb36945cec2a3941bcf94deac4653c6cbaee79"} Jan 26 08:08:58 crc kubenswrapper[4806]: I0126 08:08:58.913207 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"70aa246b-31a1-4800-b76e-d50a2002a5f8","Type":"ContainerStarted","Data":"820c5ddf7878fb67663da86da83a59b0e0340d550924f45b6e4a365b10da1113"} Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.087452 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0" path="/var/lib/kubelet/pods/ed1a6d5a-1ba9-4070-a696-04e2bb2eb7e0/volumes" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 
08:08:59.100568 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.115713 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.122515 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.122897 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.123297 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vq6lf" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.123670 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.127188 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.227503 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.227580 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.227602 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.227716 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.227920 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.227944 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " 
pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.227990 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg7mx\" (UniqueName: \"kubernetes.io/projected/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-kube-api-access-gg7mx\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.228031 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.261669 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.262588 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.265745 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-944nl" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.265873 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.266061 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.331255 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.331301 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.331345 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg7mx\" (UniqueName: \"kubernetes.io/projected/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-kube-api-access-gg7mx\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.331369 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.331404 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " 
pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.331428 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.331445 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.331468 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.333973 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.334719 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.335355 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.336108 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.370545 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.372410 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.382426 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.404575 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.408204 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg7mx\" (UniqueName: \"kubernetes.io/projected/cc07bbaf-381b-4edc-acd9-48211c3eb4c6-kube-api-access-gg7mx\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.423820 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"cc07bbaf-381b-4edc-acd9-48211c3eb4c6\") " pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.434067 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/376996ab-adaf-4126-80f8-09242f277fe2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.441645 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/376996ab-adaf-4126-80f8-09242f277fe2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.439723 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.442084 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/376996ab-adaf-4126-80f8-09242f277fe2-config-data\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.442179 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrdpg\" (UniqueName: \"kubernetes.io/projected/376996ab-adaf-4126-80f8-09242f277fe2-kube-api-access-zrdpg\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.442303 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/376996ab-adaf-4126-80f8-09242f277fe2-kolla-config\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.543362 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/376996ab-adaf-4126-80f8-09242f277fe2-config-data\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.543417 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrdpg\" (UniqueName: \"kubernetes.io/projected/376996ab-adaf-4126-80f8-09242f277fe2-kube-api-access-zrdpg\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.543457 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/376996ab-adaf-4126-80f8-09242f277fe2-kolla-config\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.543492 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/376996ab-adaf-4126-80f8-09242f277fe2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.543567 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/376996ab-adaf-4126-80f8-09242f277fe2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.545052 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/376996ab-adaf-4126-80f8-09242f277fe2-config-data\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.545502 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/376996ab-adaf-4126-80f8-09242f277fe2-kolla-config\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.550058 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/376996ab-adaf-4126-80f8-09242f277fe2-memcached-tls-certs\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.550756 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/376996ab-adaf-4126-80f8-09242f277fe2-combined-ca-bundle\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.587106 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrdpg\" (UniqueName: \"kubernetes.io/projected/376996ab-adaf-4126-80f8-09242f277fe2-kube-api-access-zrdpg\") pod \"memcached-0\" (UID: \"376996ab-adaf-4126-80f8-09242f277fe2\") " pod="openstack/memcached-0" Jan 26 08:08:59 crc kubenswrapper[4806]: I0126 08:08:59.597014 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 08:09:00 crc kubenswrapper[4806]: I0126 08:09:00.220337 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 08:09:00 crc kubenswrapper[4806]: I0126 08:09:00.485751 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 08:09:00 crc kubenswrapper[4806]: W0126 08:09:00.572028 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc07bbaf_381b_4edc_acd9_48211c3eb4c6.slice/crio-090f6b95ce62ef58860366cd042c339b5fa0e75893c7bd9139362dda5699f7b3 WatchSource:0}: Error finding container 090f6b95ce62ef58860366cd042c339b5fa0e75893c7bd9139362dda5699f7b3: Status 404 returned error can't find the container with id 090f6b95ce62ef58860366cd042c339b5fa0e75893c7bd9139362dda5699f7b3 Jan 26 08:09:01 crc kubenswrapper[4806]: I0126 08:09:01.020800 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"376996ab-adaf-4126-80f8-09242f277fe2","Type":"ContainerStarted","Data":"4b186744645329fc49f08a5d0421a76e3ccc2fe74d3912cbe68b2be1164f592f"} Jan 26 08:09:01 crc kubenswrapper[4806]: I0126 08:09:01.022847 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cc07bbaf-381b-4edc-acd9-48211c3eb4c6","Type":"ContainerStarted","Data":"090f6b95ce62ef58860366cd042c339b5fa0e75893c7bd9139362dda5699f7b3"} Jan 26 08:09:01 crc kubenswrapper[4806]: I0126 08:09:01.275123 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 08:09:01 crc kubenswrapper[4806]: I0126 08:09:01.276186 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 08:09:01 crc kubenswrapper[4806]: I0126 08:09:01.286914 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-m2wnk" Jan 26 08:09:01 crc kubenswrapper[4806]: I0126 08:09:01.355686 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 08:09:01 crc kubenswrapper[4806]: I0126 08:09:01.413801 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r6cr\" (UniqueName: \"kubernetes.io/projected/0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a-kube-api-access-8r6cr\") pod \"kube-state-metrics-0\" (UID: \"0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a\") " pod="openstack/kube-state-metrics-0" Jan 26 08:09:01 crc kubenswrapper[4806]: I0126 08:09:01.514852 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r6cr\" (UniqueName: \"kubernetes.io/projected/0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a-kube-api-access-8r6cr\") pod \"kube-state-metrics-0\" (UID: \"0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a\") " pod="openstack/kube-state-metrics-0" Jan 26 08:09:01 crc kubenswrapper[4806]: I0126 08:09:01.536491 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r6cr\" (UniqueName: \"kubernetes.io/projected/0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a-kube-api-access-8r6cr\") pod \"kube-state-metrics-0\" (UID: \"0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a\") " pod="openstack/kube-state-metrics-0" Jan 26 08:09:01 crc kubenswrapper[4806]: I0126 08:09:01.660104 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 08:09:02 crc kubenswrapper[4806]: I0126 08:09:02.531890 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 08:09:03 crc kubenswrapper[4806]: I0126 08:09:03.081507 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a","Type":"ContainerStarted","Data":"002cb32771f528b20edab6100fbf0401cf4e50f474e4ba20d6291943ffbe329c"} Jan 26 08:09:04 crc kubenswrapper[4806]: I0126 08:09:04.981780 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 08:09:04 crc kubenswrapper[4806]: I0126 08:09:04.983338 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:04 crc kubenswrapper[4806]: I0126 08:09:04.989782 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 08:09:04 crc kubenswrapper[4806]: I0126 08:09:04.989983 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:04.995409 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.028309 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.028373 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-6dgqs" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.028500 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.128467 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.128852 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.128969 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-config\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.128989 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.129010 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.129151 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.129196 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.129367 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lb299\" (UniqueName: \"kubernetes.io/projected/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-kube-api-access-lb299\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.233267 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.233334 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.233369 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-config\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.233384 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.233397 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.233424 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.233442 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.233489 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lb299\" (UniqueName: \"kubernetes.io/projected/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-kube-api-access-lb299\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 
08:09:05.237984 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.238576 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-config\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.239174 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.239566 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.244252 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.260046 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lb299\" (UniqueName: \"kubernetes.io/projected/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-kube-api-access-lb299\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.261006 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.279080 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.307990 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5\") " pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:05 crc kubenswrapper[4806]: I0126 08:09:05.358241 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.213476 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jb2zj"] Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.219370 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.226898 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-gq7kz" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.227203 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.227346 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.234730 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jb2zj"] Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.247766 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-r7hjs"] Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.249257 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.264388 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-r7hjs"] Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.353538 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a9b96a34-03af-4967-bec7-e1beda976396-etc-ovs\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.353587 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a9b96a34-03af-4967-bec7-e1beda976396-var-run\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.353607 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-ovn-controller-tls-certs\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.353634 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t424\" (UniqueName: \"kubernetes.io/projected/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-kube-api-access-8t424\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.353666 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-scripts\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc 
kubenswrapper[4806]: I0126 08:09:06.353737 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9b96a34-03af-4967-bec7-e1beda976396-var-log\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.353772 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a9b96a34-03af-4967-bec7-e1beda976396-var-lib\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.353843 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf7cx\" (UniqueName: \"kubernetes.io/projected/a9b96a34-03af-4967-bec7-e1beda976396-kube-api-access-cf7cx\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.353912 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-combined-ca-bundle\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.353988 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9b96a34-03af-4967-bec7-e1beda976396-scripts\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.354057 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-var-run\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.354071 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-var-run-ovn\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.354096 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-var-log-ovn\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.456181 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9b96a34-03af-4967-bec7-e1beda976396-scripts\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.456793 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-var-run\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.456845 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-var-run-ovn\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.456873 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-var-log-ovn\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.456985 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a9b96a34-03af-4967-bec7-e1beda976396-etc-ovs\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.457011 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a9b96a34-03af-4967-bec7-e1beda976396-var-run\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.457028 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-ovn-controller-tls-certs\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.457058 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t424\" (UniqueName: \"kubernetes.io/projected/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-kube-api-access-8t424\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.457080 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-scripts\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.457100 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9b96a34-03af-4967-bec7-e1beda976396-var-log\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.457114 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a9b96a34-03af-4967-bec7-e1beda976396-var-lib\") pod \"ovn-controller-ovs-r7hjs\" (UID: 
\"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.457139 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf7cx\" (UniqueName: \"kubernetes.io/projected/a9b96a34-03af-4967-bec7-e1beda976396-kube-api-access-cf7cx\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.457164 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-combined-ca-bundle\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.458016 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-var-run\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.458151 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-var-run-ovn\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.458304 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-var-log-ovn\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.458605 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a9b96a34-03af-4967-bec7-e1beda976396-etc-ovs\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.458672 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a9b96a34-03af-4967-bec7-e1beda976396-var-run\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.459086 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a9b96a34-03af-4967-bec7-e1beda976396-scripts\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.459437 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9b96a34-03af-4967-bec7-e1beda976396-var-log\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.459880 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/a9b96a34-03af-4967-bec7-e1beda976396-var-lib\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.463824 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-scripts\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.480748 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-ovn-controller-tls-certs\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.482099 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-combined-ca-bundle\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.483585 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t424\" (UniqueName: \"kubernetes.io/projected/b5d47098-d6d7-4b59-a88c-4bfb7d643a89-kube-api-access-8t424\") pod \"ovn-controller-jb2zj\" (UID: \"b5d47098-d6d7-4b59-a88c-4bfb7d643a89\") " pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.512164 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf7cx\" (UniqueName: \"kubernetes.io/projected/a9b96a34-03af-4967-bec7-e1beda976396-kube-api-access-cf7cx\") pod \"ovn-controller-ovs-r7hjs\" (UID: \"a9b96a34-03af-4967-bec7-e1beda976396\") " pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.553403 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:06 crc kubenswrapper[4806]: I0126 08:09:06.578646 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.342438 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.345795 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.351806 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.351901 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.352674 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-w2pqx" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.355788 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.362696 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.391299 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ceca24-1275-4ecf-a77b-acb2728d7cc4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.391354 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69ceca24-1275-4ecf-a77b-acb2728d7cc4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.391404 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.391424 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69ceca24-1275-4ecf-a77b-acb2728d7cc4-config\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.391438 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ceca24-1275-4ecf-a77b-acb2728d7cc4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.391463 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9plhl\" (UniqueName: \"kubernetes.io/projected/69ceca24-1275-4ecf-a77b-acb2728d7cc4-kube-api-access-9plhl\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.391481 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69ceca24-1275-4ecf-a77b-acb2728d7cc4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " 
pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.391510 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ceca24-1275-4ecf-a77b-acb2728d7cc4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.493134 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ceca24-1275-4ecf-a77b-acb2728d7cc4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.493295 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ceca24-1275-4ecf-a77b-acb2728d7cc4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.493332 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69ceca24-1275-4ecf-a77b-acb2728d7cc4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.493398 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.493425 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69ceca24-1275-4ecf-a77b-acb2728d7cc4-config\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.493449 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ceca24-1275-4ecf-a77b-acb2728d7cc4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.493485 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9plhl\" (UniqueName: \"kubernetes.io/projected/69ceca24-1275-4ecf-a77b-acb2728d7cc4-kube-api-access-9plhl\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.493513 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69ceca24-1275-4ecf-a77b-acb2728d7cc4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.493801 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.494686 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/69ceca24-1275-4ecf-a77b-acb2728d7cc4-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.500144 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69ceca24-1275-4ecf-a77b-acb2728d7cc4-config\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.501281 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/69ceca24-1275-4ecf-a77b-acb2728d7cc4-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.502718 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ceca24-1275-4ecf-a77b-acb2728d7cc4-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.503147 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ceca24-1275-4ecf-a77b-acb2728d7cc4-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.503875 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/69ceca24-1275-4ecf-a77b-acb2728d7cc4-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.540875 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.543266 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9plhl\" (UniqueName: \"kubernetes.io/projected/69ceca24-1275-4ecf-a77b-acb2728d7cc4-kube-api-access-9plhl\") pod \"ovsdbserver-sb-0\" (UID: \"69ceca24-1275-4ecf-a77b-acb2728d7cc4\") " pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:08 crc kubenswrapper[4806]: I0126 08:09:08.665133 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:22 crc kubenswrapper[4806]: E0126 08:09:22.104466 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 26 08:09:22 crc kubenswrapper[4806]: E0126 08:09:22.105597 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n698h64hf5h567hd8h5fbh5d4h669h57bh5cdh558h585h87hf9h697h597h9ch578h599h594hd6h5b6h64fh55bhf5h5c5h5c7h68fh5fchdfhbfh5ccq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrdpg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod memcached-0_openstack(376996ab-adaf-4126-80f8-09242f277fe2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:09:22 crc kubenswrapper[4806]: E0126 08:09:22.106962 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="376996ab-adaf-4126-80f8-09242f277fe2" Jan 26 08:09:22 crc kubenswrapper[4806]: E0126 08:09:22.296946 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="376996ab-adaf-4126-80f8-09242f277fe2" Jan 26 08:09:24 crc kubenswrapper[4806]: E0126 08:09:24.330626 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 26 08:09:24 crc kubenswrapper[4806]: E0126 08:09:24.330988 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlfjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(70aa246b-31a1-4800-b76e-d50a2002a5f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:09:24 crc kubenswrapper[4806]: 
E0126 08:09:24.332241 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="70aa246b-31a1-4800-b76e-d50a2002a5f8" Jan 26 08:09:24 crc kubenswrapper[4806]: E0126 08:09:24.367346 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 26 08:09:24 crc kubenswrapper[4806]: E0126 08:09:24.368033 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gg7mx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(cc07bbaf-381b-4edc-acd9-48211c3eb4c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:09:24 crc kubenswrapper[4806]: E0126 08:09:24.369350 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="cc07bbaf-381b-4edc-acd9-48211c3eb4c6" Jan 26 08:09:25 crc kubenswrapper[4806]: E0126 08:09:25.321145 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="cc07bbaf-381b-4edc-acd9-48211c3eb4c6" Jan 26 08:09:25 crc kubenswrapper[4806]: E0126 08:09:25.321281 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="70aa246b-31a1-4800-b76e-d50a2002a5f8" Jan 26 08:09:32 crc kubenswrapper[4806]: E0126 08:09:32.730504 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 08:09:32 crc kubenswrapper[4806]: E0126 08:09:32.731042 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42z7w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-z28t5_openstack(bf67838f-5d5e-48a8-ba5a-ed0c64d2756f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:09:32 crc kubenswrapper[4806]: E0126 08:09:32.732233 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" 
podUID="bf67838f-5d5e-48a8-ba5a-ed0c64d2756f" Jan 26 08:09:32 crc kubenswrapper[4806]: E0126 08:09:32.734988 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 08:09:32 crc kubenswrapper[4806]: E0126 08:09:32.735101 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5xrpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-6kr5x_openstack(fcd872ae-4b9c-4f34-8b78-aac4c0602746): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:09:32 crc kubenswrapper[4806]: E0126 08:09:32.736259 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" podUID="fcd872ae-4b9c-4f34-8b78-aac4c0602746" Jan 26 08:09:32 crc kubenswrapper[4806]: E0126 08:09:32.839454 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 08:09:32 crc kubenswrapper[4806]: E0126 08:09:32.839621 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89k5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-nc6vl_openstack(01b8727d-453c-4e87-aaa9-e938db5d17dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:09:32 crc kubenswrapper[4806]: E0126 08:09:32.841742 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" podUID="01b8727d-453c-4e87-aaa9-e938db5d17dc" Jan 26 08:09:32 crc kubenswrapper[4806]: E0126 08:09:32.849449 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 08:09:32 crc kubenswrapper[4806]: E0126 08:09:32.849638 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zc2zv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-8lxk9_openstack(8712dfde-3740-4f37-85c2-bc532a559a48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:09:32 crc kubenswrapper[4806]: E0126 08:09:32.851002 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" podUID="8712dfde-3740-4f37-85c2-bc532a559a48" Jan 26 08:09:33 crc kubenswrapper[4806]: I0126 08:09:33.353074 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jb2zj"] Jan 26 08:09:33 crc kubenswrapper[4806]: W0126 08:09:33.374678 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5d47098_d6d7_4b59_a88c_4bfb7d643a89.slice/crio-af265163df9d3b61faea86848b969654dad18d285a73d0217b5aaf9eb53aeba5 WatchSource:0}: Error finding container af265163df9d3b61faea86848b969654dad18d285a73d0217b5aaf9eb53aeba5: Status 404 returned error can't find the container with id af265163df9d3b61faea86848b969654dad18d285a73d0217b5aaf9eb53aeba5 Jan 26 08:09:33 crc kubenswrapper[4806]: E0126 08:09:33.386156 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" podUID="01b8727d-453c-4e87-aaa9-e938db5d17dc" Jan 26 08:09:33 crc kubenswrapper[4806]: E0126 08:09:33.386502 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" 
podUID="bf67838f-5d5e-48a8-ba5a-ed0c64d2756f" Jan 26 08:09:33 crc kubenswrapper[4806]: I0126 08:09:33.626483 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-r7hjs"] Jan 26 08:09:34 crc kubenswrapper[4806]: E0126 08:09:34.074456 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 08:09:34 crc kubenswrapper[4806]: E0126 08:09:34.074565 4806 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 08:09:34 crc kubenswrapper[4806]: E0126 08:09:34.074675 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8r6cr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 08:09:34 crc kubenswrapper[4806]: E0126 08:09:34.076028 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a" Jan 26 08:09:34 crc 
kubenswrapper[4806]: I0126 08:09:34.250793 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.288200 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.392225 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" event={"ID":"fcd872ae-4b9c-4f34-8b78-aac4c0602746","Type":"ContainerDied","Data":"fe10e18eeb833b27470be15c662f928ea3ad9824baa1bd225705acb9bea8225e"} Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.392309 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6kr5x" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.396494 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-r7hjs" event={"ID":"a9b96a34-03af-4967-bec7-e1beda976396","Type":"ContainerStarted","Data":"1d5673690011aa9c2df90a9e7fd745d751e894087043816595d87cc899df8e5f"} Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.397631 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jb2zj" event={"ID":"b5d47098-d6d7-4b59-a88c-4bfb7d643a89","Type":"ContainerStarted","Data":"af265163df9d3b61faea86848b969654dad18d285a73d0217b5aaf9eb53aeba5"} Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.399738 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.399916 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-8lxk9" event={"ID":"8712dfde-3740-4f37-85c2-bc532a559a48","Type":"ContainerDied","Data":"f49269112e729b52bd7d2e6460c8ae3e41d0af3e10827266557d321dd094fcb1"} Jan 26 08:09:34 crc kubenswrapper[4806]: E0126 08:09:34.401826 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.422344 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xrpx\" (UniqueName: \"kubernetes.io/projected/fcd872ae-4b9c-4f34-8b78-aac4c0602746-kube-api-access-5xrpx\") pod \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\" (UID: \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\") " Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.422485 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcd872ae-4b9c-4f34-8b78-aac4c0602746-config\") pod \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\" (UID: \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\") " Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.422552 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcd872ae-4b9c-4f34-8b78-aac4c0602746-dns-svc\") pod \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\" (UID: \"fcd872ae-4b9c-4f34-8b78-aac4c0602746\") " Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.422599 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8712dfde-3740-4f37-85c2-bc532a559a48-config\") pod \"8712dfde-3740-4f37-85c2-bc532a559a48\" (UID: \"8712dfde-3740-4f37-85c2-bc532a559a48\") " Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.422632 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc2zv\" (UniqueName: \"kubernetes.io/projected/8712dfde-3740-4f37-85c2-bc532a559a48-kube-api-access-zc2zv\") pod \"8712dfde-3740-4f37-85c2-bc532a559a48\" (UID: \"8712dfde-3740-4f37-85c2-bc532a559a48\") " Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.423853 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8712dfde-3740-4f37-85c2-bc532a559a48-config" (OuterVolumeSpecName: "config") pod "8712dfde-3740-4f37-85c2-bc532a559a48" (UID: "8712dfde-3740-4f37-85c2-bc532a559a48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.423885 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcd872ae-4b9c-4f34-8b78-aac4c0602746-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fcd872ae-4b9c-4f34-8b78-aac4c0602746" (UID: "fcd872ae-4b9c-4f34-8b78-aac4c0602746"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.424436 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fcd872ae-4b9c-4f34-8b78-aac4c0602746-config" (OuterVolumeSpecName: "config") pod "fcd872ae-4b9c-4f34-8b78-aac4c0602746" (UID: "fcd872ae-4b9c-4f34-8b78-aac4c0602746"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.429477 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd872ae-4b9c-4f34-8b78-aac4c0602746-kube-api-access-5xrpx" (OuterVolumeSpecName: "kube-api-access-5xrpx") pod "fcd872ae-4b9c-4f34-8b78-aac4c0602746" (UID: "fcd872ae-4b9c-4f34-8b78-aac4c0602746"). InnerVolumeSpecName "kube-api-access-5xrpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.430102 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8712dfde-3740-4f37-85c2-bc532a559a48-kube-api-access-zc2zv" (OuterVolumeSpecName: "kube-api-access-zc2zv") pod "8712dfde-3740-4f37-85c2-bc532a559a48" (UID: "8712dfde-3740-4f37-85c2-bc532a559a48"). InnerVolumeSpecName "kube-api-access-zc2zv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.496137 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 08:09:34 crc kubenswrapper[4806]: W0126 08:09:34.504015 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69ceca24_1275_4ecf_a77b_acb2728d7cc4.slice/crio-b635702da059cf335dfbd9f41e2be0b55a1a6b04bfbe789b2532d5a9e4a8dda9 WatchSource:0}: Error finding container b635702da059cf335dfbd9f41e2be0b55a1a6b04bfbe789b2532d5a9e4a8dda9: Status 404 returned error can't find the container with id b635702da059cf335dfbd9f41e2be0b55a1a6b04bfbe789b2532d5a9e4a8dda9 Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.524835 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8712dfde-3740-4f37-85c2-bc532a559a48-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.524905 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc2zv\" (UniqueName: \"kubernetes.io/projected/8712dfde-3740-4f37-85c2-bc532a559a48-kube-api-access-zc2zv\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.524919 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xrpx\" (UniqueName: \"kubernetes.io/projected/fcd872ae-4b9c-4f34-8b78-aac4c0602746-kube-api-access-5xrpx\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.524933 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcd872ae-4b9c-4f34-8b78-aac4c0602746-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.524946 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fcd872ae-4b9c-4f34-8b78-aac4c0602746-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.791710 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6kr5x"] Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.806000 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6kr5x"] Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.834071 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8lxk9"] Jan 26 08:09:34 crc kubenswrapper[4806]: I0126 08:09:34.839939 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-8lxk9"] Jan 26 08:09:35 crc kubenswrapper[4806]: I0126 08:09:35.062062 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8712dfde-3740-4f37-85c2-bc532a559a48" path="/var/lib/kubelet/pods/8712dfde-3740-4f37-85c2-bc532a559a48/volumes" Jan 26 08:09:35 crc kubenswrapper[4806]: I0126 08:09:35.063466 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd872ae-4b9c-4f34-8b78-aac4c0602746" path="/var/lib/kubelet/pods/fcd872ae-4b9c-4f34-8b78-aac4c0602746/volumes" Jan 26 08:09:35 crc kubenswrapper[4806]: I0126 08:09:35.414116 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35","Type":"ContainerStarted","Data":"2218f5533d96af9fe346f68622866cb68caba66cdeb205c50f295727b54e7752"} Jan 26 08:09:35 crc 
kubenswrapper[4806]: I0126 08:09:35.420024 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"025ae3ca-3082-4bc8-8611-5b23cec63932","Type":"ContainerStarted","Data":"064bb866426cda4e680a27ba4d576877d2096d5ec28f01113c3d45c614189c37"} Jan 26 08:09:35 crc kubenswrapper[4806]: I0126 08:09:35.424079 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"69ceca24-1275-4ecf-a77b-acb2728d7cc4","Type":"ContainerStarted","Data":"b635702da059cf335dfbd9f41e2be0b55a1a6b04bfbe789b2532d5a9e4a8dda9"} Jan 26 08:09:35 crc kubenswrapper[4806]: I0126 08:09:35.516987 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 08:09:36 crc kubenswrapper[4806]: I0126 08:09:36.432323 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5","Type":"ContainerStarted","Data":"528da487b6861e6367c853a8d70e80524c391adb26c10a72c5a2c853ce1f848f"} Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.447054 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"69ceca24-1275-4ecf-a77b-acb2728d7cc4","Type":"ContainerStarted","Data":"d7aa24c2a46ee7b4b1658f071331f3429f86b6808b757aa4af1d13ac37496746"} Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.448715 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-r7hjs" event={"ID":"a9b96a34-03af-4967-bec7-e1beda976396","Type":"ContainerStarted","Data":"940272a4ce0d21a18aa4c65e93ac361e38fd8ce5be8504fae047f740517e6132"} Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.451224 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"376996ab-adaf-4126-80f8-09242f277fe2","Type":"ContainerStarted","Data":"602b74d93a08ef2a7a3ae1b2e4b814e368b44a98b957594ab16a717fd908c13d"} Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.451655 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.452839 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5","Type":"ContainerStarted","Data":"0a65e379e7f076e1bee51d4e1ee5326e09080c21f63caf89d069849d7aab15d8"} Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.454013 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jb2zj" event={"ID":"b5d47098-d6d7-4b59-a88c-4bfb7d643a89","Type":"ContainerStarted","Data":"5063e6b4608f8cf0f0cbb78cceb4e0997263ebe9c196c56d2974d05bbe2b88d1"} Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.454372 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-jb2zj" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.499397 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jb2zj" podStartSLOduration=27.817949998 podStartE2EDuration="32.499373523s" podCreationTimestamp="2026-01-26 08:09:06 +0000 UTC" firstStartedPulling="2026-01-26 08:09:33.387977122 +0000 UTC m=+952.652385178" lastFinishedPulling="2026-01-26 08:09:38.069400607 +0000 UTC m=+957.333808703" observedRunningTime="2026-01-26 08:09:38.492914816 +0000 UTC m=+957.757322882" watchObservedRunningTime="2026-01-26 08:09:38.499373523 +0000 UTC m=+957.763781579" Jan 26 08:09:38 crc 
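The "Observed pod startup duration" entry for openstack/ovn-controller-jb2zj just above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A short check of the arithmetic, using values copied from the entry:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the ovn-controller-jb2zj startup-latency entry above.
	created := time.Date(2026, time.January, 26, 8, 9, 6, 0, time.UTC)                       // podCreationTimestamp
	watchObservedRunning := time.Date(2026, time.January, 26, 8, 9, 38, 499373523, time.UTC) // watchObservedRunningTime
	const (
		firstStartedPulling = 952.652385178 // m=+ offset, seconds
		lastFinishedPulling = 957.333808703 // m=+ offset, seconds
	)

	e2e := watchObservedRunning.Sub(created).Seconds() // 32.499373523s = podStartE2EDuration
	pull := lastFinishedPulling - firstStartedPulling  // 4.681423525s spent pulling the image
	slo := e2e - pull                                  // 27.817949998s = podStartSLOduration
	fmt.Printf("e2e=%.9fs pull=%.9fs slo=%.9fs\n", e2e, pull, slo)
}
```

The memcached-0 entry that follows checks out the same way: 39.512142943s end to end minus 37.807330815s of image pulling gives the logged 1.704812128s SLO duration.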
kubenswrapper[4806]: I0126 08:09:38.512158 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=1.704812128 podStartE2EDuration="39.512142943s" podCreationTimestamp="2026-01-26 08:08:59 +0000 UTC" firstStartedPulling="2026-01-26 08:09:00.266831723 +0000 UTC m=+919.531239769" lastFinishedPulling="2026-01-26 08:09:38.074162528 +0000 UTC m=+957.338570584" observedRunningTime="2026-01-26 08:09:38.508108852 +0000 UTC m=+957.772516898" watchObservedRunningTime="2026-01-26 08:09:38.512142943 +0000 UTC m=+957.776550989" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.588604 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-j6k99"] Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.596785 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.610588 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.637590 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-j6k99"] Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.710060 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c70579c-19d0-4675-9fab-75415cbcaf47-config\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.710100 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9c70579c-19d0-4675-9fab-75415cbcaf47-ovs-rundir\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.710129 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjj8c\" (UniqueName: \"kubernetes.io/projected/9c70579c-19d0-4675-9fab-75415cbcaf47-kube-api-access-jjj8c\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.710159 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9c70579c-19d0-4675-9fab-75415cbcaf47-ovn-rundir\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.710182 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c70579c-19d0-4675-9fab-75415cbcaf47-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.710268 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9c70579c-19d0-4675-9fab-75415cbcaf47-combined-ca-bundle\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.740809 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z28t5"] Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.768578 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-kbjs2"] Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.769829 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.773529 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.801635 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-kbjs2"] Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.812293 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c70579c-19d0-4675-9fab-75415cbcaf47-combined-ca-bundle\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.812391 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c70579c-19d0-4675-9fab-75415cbcaf47-config\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.812414 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9c70579c-19d0-4675-9fab-75415cbcaf47-ovs-rundir\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.812437 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjj8c\" (UniqueName: \"kubernetes.io/projected/9c70579c-19d0-4675-9fab-75415cbcaf47-kube-api-access-jjj8c\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.812466 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9c70579c-19d0-4675-9fab-75415cbcaf47-ovn-rundir\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.812488 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c70579c-19d0-4675-9fab-75415cbcaf47-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.813176 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9c70579c-19d0-4675-9fab-75415cbcaf47-config\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.813660 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/9c70579c-19d0-4675-9fab-75415cbcaf47-ovs-rundir\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.813701 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/9c70579c-19d0-4675-9fab-75415cbcaf47-ovn-rundir\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.817718 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c70579c-19d0-4675-9fab-75415cbcaf47-combined-ca-bundle\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.836096 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c70579c-19d0-4675-9fab-75415cbcaf47-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.846869 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjj8c\" (UniqueName: \"kubernetes.io/projected/9c70579c-19d0-4675-9fab-75415cbcaf47-kube-api-access-jjj8c\") pod \"ovn-controller-metrics-j6k99\" (UID: \"9c70579c-19d0-4675-9fab-75415cbcaf47\") " pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.914029 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-config\") pod \"dnsmasq-dns-5bf47b49b7-kbjs2\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.914078 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6pww\" (UniqueName: \"kubernetes.io/projected/be395281-fc30-4d88-8d0c-e1528c53d8cb-kube-api-access-m6pww\") pod \"dnsmasq-dns-5bf47b49b7-kbjs2\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.914166 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-kbjs2\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.914185 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-kbjs2\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:38 crc kubenswrapper[4806]: I0126 08:09:38.953256 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-j6k99" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.015498 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-config\") pod \"dnsmasq-dns-5bf47b49b7-kbjs2\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.015656 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6pww\" (UniqueName: \"kubernetes.io/projected/be395281-fc30-4d88-8d0c-e1528c53d8cb-kube-api-access-m6pww\") pod \"dnsmasq-dns-5bf47b49b7-kbjs2\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.015719 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-kbjs2\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.015736 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-kbjs2\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.016965 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-kbjs2\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.017250 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-kbjs2\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.019227 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-config\") pod \"dnsmasq-dns-5bf47b49b7-kbjs2\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.041264 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6pww\" (UniqueName: \"kubernetes.io/projected/be395281-fc30-4d88-8d0c-e1528c53d8cb-kube-api-access-m6pww\") pod \"dnsmasq-dns-5bf47b49b7-kbjs2\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.099773 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.138462 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nc6vl"] Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.145067 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-92zz8"] Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.149357 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.152745 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.205965 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-92zz8"] Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.219370 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-config\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.219423 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62snv\" (UniqueName: \"kubernetes.io/projected/16f86cb8-055a-4696-9aca-e994ec8ba516-kube-api-access-62snv\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.219462 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.219548 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-dns-svc\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.219595 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.279804 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.320143 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42z7w\" (UniqueName: \"kubernetes.io/projected/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-kube-api-access-42z7w\") pod \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\" (UID: \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\") " Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.320329 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-config\") pod \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\" (UID: \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\") " Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.320394 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-dns-svc\") pod \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\" (UID: \"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f\") " Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.320572 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-config\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.320598 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62snv\" (UniqueName: \"kubernetes.io/projected/16f86cb8-055a-4696-9aca-e994ec8ba516-kube-api-access-62snv\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.320631 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.320706 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-dns-svc\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.320752 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.322092 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.323109 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf67838f-5d5e-48a8-ba5a-ed0c64d2756f" (UID: "bf67838f-5d5e-48a8-ba5a-ed0c64d2756f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.323131 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-config" (OuterVolumeSpecName: "config") pod "bf67838f-5d5e-48a8-ba5a-ed0c64d2756f" (UID: "bf67838f-5d5e-48a8-ba5a-ed0c64d2756f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.323328 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.323619 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-dns-svc\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.328467 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-config\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.334012 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-kube-api-access-42z7w" (OuterVolumeSpecName: "kube-api-access-42z7w") pod "bf67838f-5d5e-48a8-ba5a-ed0c64d2756f" (UID: "bf67838f-5d5e-48a8-ba5a-ed0c64d2756f"). InnerVolumeSpecName "kube-api-access-42z7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.360089 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62snv\" (UniqueName: \"kubernetes.io/projected/16f86cb8-055a-4696-9aca-e994ec8ba516-kube-api-access-62snv\") pod \"dnsmasq-dns-8554648995-92zz8\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.422659 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.422934 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.422945 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42z7w\" (UniqueName: \"kubernetes.io/projected/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f-kube-api-access-42z7w\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.475718 4806 generic.go:334] "Generic (PLEG): container finished" podID="a9b96a34-03af-4967-bec7-e1beda976396" containerID="940272a4ce0d21a18aa4c65e93ac361e38fd8ce5be8504fae047f740517e6132" exitCode=0 Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.475808 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-r7hjs" event={"ID":"a9b96a34-03af-4967-bec7-e1beda976396","Type":"ContainerDied","Data":"940272a4ce0d21a18aa4c65e93ac361e38fd8ce5be8504fae047f740517e6132"} Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.479902 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.480327 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-z28t5" event={"ID":"bf67838f-5d5e-48a8-ba5a-ed0c64d2756f","Type":"ContainerDied","Data":"fc7b5a287a722aa632011a18f078f8b31df229abc0b5e67da12cdfc8eea89399"} Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.562830 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z28t5"] Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.571844 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-z28t5"] Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.594903 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.718256 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-j6k99"] Jan 26 08:09:39 crc kubenswrapper[4806]: W0126 08:09:39.733892 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c70579c_19d0_4675_9fab_75415cbcaf47.slice/crio-4e187191a438db8c4149b84efc7e45e3939cc40df4b300e7dd9151cd460aaf98 WatchSource:0}: Error finding container 4e187191a438db8c4149b84efc7e45e3939cc40df4b300e7dd9151cd460aaf98: Status 404 returned error can't find the container with id 4e187191a438db8c4149b84efc7e45e3939cc40df4b300e7dd9151cd460aaf98 Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.804035 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.830918 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01b8727d-453c-4e87-aaa9-e938db5d17dc-config" (OuterVolumeSpecName: "config") pod "01b8727d-453c-4e87-aaa9-e938db5d17dc" (UID: "01b8727d-453c-4e87-aaa9-e938db5d17dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.831017 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01b8727d-453c-4e87-aaa9-e938db5d17dc-config\") pod \"01b8727d-453c-4e87-aaa9-e938db5d17dc\" (UID: \"01b8727d-453c-4e87-aaa9-e938db5d17dc\") " Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.831069 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01b8727d-453c-4e87-aaa9-e938db5d17dc-dns-svc\") pod \"01b8727d-453c-4e87-aaa9-e938db5d17dc\" (UID: \"01b8727d-453c-4e87-aaa9-e938db5d17dc\") " Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.831224 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89k5s\" (UniqueName: \"kubernetes.io/projected/01b8727d-453c-4e87-aaa9-e938db5d17dc-kube-api-access-89k5s\") pod \"01b8727d-453c-4e87-aaa9-e938db5d17dc\" (UID: \"01b8727d-453c-4e87-aaa9-e938db5d17dc\") " Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.831581 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01b8727d-453c-4e87-aaa9-e938db5d17dc-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.832659 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01b8727d-453c-4e87-aaa9-e938db5d17dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "01b8727d-453c-4e87-aaa9-e938db5d17dc" (UID: "01b8727d-453c-4e87-aaa9-e938db5d17dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.841278 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b8727d-453c-4e87-aaa9-e938db5d17dc-kube-api-access-89k5s" (OuterVolumeSpecName: "kube-api-access-89k5s") pod "01b8727d-453c-4e87-aaa9-e938db5d17dc" (UID: "01b8727d-453c-4e87-aaa9-e938db5d17dc"). InnerVolumeSpecName "kube-api-access-89k5s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.844623 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-kbjs2"] Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.934823 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89k5s\" (UniqueName: \"kubernetes.io/projected/01b8727d-453c-4e87-aaa9-e938db5d17dc-kube-api-access-89k5s\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:39 crc kubenswrapper[4806]: I0126 08:09:39.934853 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01b8727d-453c-4e87-aaa9-e938db5d17dc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:40 crc kubenswrapper[4806]: I0126 08:09:40.224074 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-92zz8"] Jan 26 08:09:40 crc kubenswrapper[4806]: I0126 08:09:40.492215 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" event={"ID":"be395281-fc30-4d88-8d0c-e1528c53d8cb","Type":"ContainerStarted","Data":"f8f189b29e5b67f2fe22e7ea1a74fe86ce9882cdb266de2bbeef69a8a2c6daa2"} Jan 26 08:09:40 crc kubenswrapper[4806]: I0126 08:09:40.496385 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"70aa246b-31a1-4800-b76e-d50a2002a5f8","Type":"ContainerStarted","Data":"f14a97a810325b3c8d26c7150edb7a84cf12200e4f6da03239730927e4dd5019"} Jan 26 08:09:40 crc kubenswrapper[4806]: I0126 08:09:40.503773 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-92zz8" event={"ID":"16f86cb8-055a-4696-9aca-e994ec8ba516","Type":"ContainerStarted","Data":"043216a60f60cc272f2b4d25e6d11834320d62a380e378d8d63635878f18a38c"} Jan 26 08:09:40 crc kubenswrapper[4806]: I0126 08:09:40.505215 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cc07bbaf-381b-4edc-acd9-48211c3eb4c6","Type":"ContainerStarted","Data":"3eb0ce335ea65fbff8eb63b8d5d597c0e95ebd5e09fc5b915be8d057e8522be8"} Jan 26 08:09:40 crc kubenswrapper[4806]: I0126 08:09:40.509479 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" event={"ID":"01b8727d-453c-4e87-aaa9-e938db5d17dc","Type":"ContainerDied","Data":"163dad23fb81989fe0e451d00ea09dfa77218cacba380bed301aff1277e6275b"} Jan 26 08:09:40 crc kubenswrapper[4806]: I0126 08:09:40.509610 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-nc6vl" Jan 26 08:09:40 crc kubenswrapper[4806]: I0126 08:09:40.517935 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-j6k99" event={"ID":"9c70579c-19d0-4675-9fab-75415cbcaf47","Type":"ContainerStarted","Data":"4e187191a438db8c4149b84efc7e45e3939cc40df4b300e7dd9151cd460aaf98"} Jan 26 08:09:40 crc kubenswrapper[4806]: I0126 08:09:40.521562 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-r7hjs" event={"ID":"a9b96a34-03af-4967-bec7-e1beda976396","Type":"ContainerStarted","Data":"45faecb40f5cb73b57076697bc5bc03102855d8e2e19ec60cbf19b6fc037a49f"} Jan 26 08:09:40 crc kubenswrapper[4806]: I0126 08:09:40.608097 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nc6vl"] Jan 26 08:09:40 crc kubenswrapper[4806]: I0126 08:09:40.611030 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-nc6vl"] Jan 26 08:09:41 crc kubenswrapper[4806]: I0126 08:09:41.084782 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b8727d-453c-4e87-aaa9-e938db5d17dc" path="/var/lib/kubelet/pods/01b8727d-453c-4e87-aaa9-e938db5d17dc/volumes" Jan 26 08:09:41 crc kubenswrapper[4806]: I0126 08:09:41.087496 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf67838f-5d5e-48a8-ba5a-ed0c64d2756f" path="/var/lib/kubelet/pods/bf67838f-5d5e-48a8-ba5a-ed0c64d2756f/volumes" Jan 26 08:09:41 crc kubenswrapper[4806]: I0126 08:09:41.529423 4806 generic.go:334] "Generic (PLEG): container finished" podID="be395281-fc30-4d88-8d0c-e1528c53d8cb" containerID="5f1616aa5ffbd2b59b472d6fa5afcea9ec9bc96c4fb28901d02563f6aa4d81fa" exitCode=0 Jan 26 08:09:41 crc kubenswrapper[4806]: I0126 08:09:41.529489 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" event={"ID":"be395281-fc30-4d88-8d0c-e1528c53d8cb","Type":"ContainerDied","Data":"5f1616aa5ffbd2b59b472d6fa5afcea9ec9bc96c4fb28901d02563f6aa4d81fa"} Jan 26 08:09:41 crc kubenswrapper[4806]: I0126 08:09:41.533318 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-r7hjs" event={"ID":"a9b96a34-03af-4967-bec7-e1beda976396","Type":"ContainerStarted","Data":"cb53614e629bb16d0b7e7d1c8274d1f37492b0aca14280d9a6ec536875364771"} Jan 26 08:09:41 crc kubenswrapper[4806]: I0126 08:09:41.533462 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:41 crc kubenswrapper[4806]: I0126 08:09:41.579340 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:09:41 crc kubenswrapper[4806]: I0126 08:09:41.581010 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-r7hjs" podStartSLOduration=31.580649369 podStartE2EDuration="35.58099213s" podCreationTimestamp="2026-01-26 08:09:06 +0000 UTC" firstStartedPulling="2026-01-26 08:09:34.069040866 +0000 UTC m=+953.333448922" lastFinishedPulling="2026-01-26 08:09:38.069383617 +0000 UTC m=+957.333791683" observedRunningTime="2026-01-26 08:09:41.578902073 +0000 UTC m=+960.843310129" watchObservedRunningTime="2026-01-26 08:09:41.58099213 +0000 UTC m=+960.845400186" Jan 26 08:09:43 crc kubenswrapper[4806]: I0126 08:09:43.556035 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" 
event={"ID":"be395281-fc30-4d88-8d0c-e1528c53d8cb","Type":"ContainerStarted","Data":"836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d"} Jan 26 08:09:43 crc kubenswrapper[4806]: I0126 08:09:43.556396 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:43 crc kubenswrapper[4806]: I0126 08:09:43.566402 4806 generic.go:334] "Generic (PLEG): container finished" podID="16f86cb8-055a-4696-9aca-e994ec8ba516" containerID="7d417ef4490e8773232f88b1cf4592ad0f4d65ba45ff8f440e2516c9fde69bad" exitCode=0 Jan 26 08:09:43 crc kubenswrapper[4806]: I0126 08:09:43.566457 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-92zz8" event={"ID":"16f86cb8-055a-4696-9aca-e994ec8ba516","Type":"ContainerDied","Data":"7d417ef4490e8773232f88b1cf4592ad0f4d65ba45ff8f440e2516c9fde69bad"} Jan 26 08:09:43 crc kubenswrapper[4806]: I0126 08:09:43.578293 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" podStartSLOduration=5.155075982 podStartE2EDuration="5.578275861s" podCreationTimestamp="2026-01-26 08:09:38 +0000 UTC" firstStartedPulling="2026-01-26 08:09:39.868299136 +0000 UTC m=+959.132707192" lastFinishedPulling="2026-01-26 08:09:40.291499015 +0000 UTC m=+959.555907071" observedRunningTime="2026-01-26 08:09:43.575512965 +0000 UTC m=+962.839921021" watchObservedRunningTime="2026-01-26 08:09:43.578275861 +0000 UTC m=+962.842683917" Jan 26 08:09:44 crc kubenswrapper[4806]: I0126 08:09:44.663713 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 08:09:45 crc kubenswrapper[4806]: I0126 08:09:45.676295 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-92zz8" event={"ID":"16f86cb8-055a-4696-9aca-e994ec8ba516","Type":"ContainerStarted","Data":"f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81"} Jan 26 08:09:45 crc kubenswrapper[4806]: I0126 08:09:45.676784 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:45 crc kubenswrapper[4806]: I0126 08:09:45.678690 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-j6k99" event={"ID":"9c70579c-19d0-4675-9fab-75415cbcaf47","Type":"ContainerStarted","Data":"3a307d0f1985cd9f4aa1443a61b7152a789408378cb57fd27f19f2447e60bdfa"} Jan 26 08:09:45 crc kubenswrapper[4806]: I0126 08:09:45.680787 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5","Type":"ContainerStarted","Data":"246c45832e5c34c42f3d31ae600fb14c62560ea2d0b72566ee14d7fb73d7b02c"} Jan 26 08:09:45 crc kubenswrapper[4806]: I0126 08:09:45.684939 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"69ceca24-1275-4ecf-a77b-acb2728d7cc4","Type":"ContainerStarted","Data":"d267ade56db4ecceb8566ec8adda30fb251f375cc844616432af35997433941d"} Jan 26 08:09:45 crc kubenswrapper[4806]: I0126 08:09:45.694515 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-92zz8" podStartSLOduration=5.902671574 podStartE2EDuration="6.694500726s" podCreationTimestamp="2026-01-26 08:09:39 +0000 UTC" firstStartedPulling="2026-01-26 08:09:40.24536704 +0000 UTC m=+959.509775096" lastFinishedPulling="2026-01-26 08:09:41.037196192 +0000 UTC m=+960.301604248" 
observedRunningTime="2026-01-26 08:09:45.692174092 +0000 UTC m=+964.956582148" watchObservedRunningTime="2026-01-26 08:09:45.694500726 +0000 UTC m=+964.958908782" Jan 26 08:09:45 crc kubenswrapper[4806]: I0126 08:09:45.713884 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=32.910215775 podStartE2EDuration="42.713864037s" podCreationTimestamp="2026-01-26 08:09:03 +0000 UTC" firstStartedPulling="2026-01-26 08:09:35.565707694 +0000 UTC m=+954.830115750" lastFinishedPulling="2026-01-26 08:09:45.369355956 +0000 UTC m=+964.633764012" observedRunningTime="2026-01-26 08:09:45.706897606 +0000 UTC m=+964.971305672" watchObservedRunningTime="2026-01-26 08:09:45.713864037 +0000 UTC m=+964.978272093" Jan 26 08:09:45 crc kubenswrapper[4806]: I0126 08:09:45.741375 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=27.860096127 podStartE2EDuration="38.741344691s" podCreationTimestamp="2026-01-26 08:09:07 +0000 UTC" firstStartedPulling="2026-01-26 08:09:34.507106513 +0000 UTC m=+953.771514569" lastFinishedPulling="2026-01-26 08:09:45.388355077 +0000 UTC m=+964.652763133" observedRunningTime="2026-01-26 08:09:45.731374757 +0000 UTC m=+964.995782823" watchObservedRunningTime="2026-01-26 08:09:45.741344691 +0000 UTC m=+965.005752757" Jan 26 08:09:45 crc kubenswrapper[4806]: I0126 08:09:45.753464 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-j6k99" podStartSLOduration=2.060606482 podStartE2EDuration="7.753448353s" podCreationTimestamp="2026-01-26 08:09:38 +0000 UTC" firstStartedPulling="2026-01-26 08:09:39.73795453 +0000 UTC m=+959.002362586" lastFinishedPulling="2026-01-26 08:09:45.430796401 +0000 UTC m=+964.695204457" observedRunningTime="2026-01-26 08:09:45.750887342 +0000 UTC m=+965.015295398" watchObservedRunningTime="2026-01-26 08:09:45.753448353 +0000 UTC m=+965.017856409" Jan 26 08:09:46 crc kubenswrapper[4806]: I0126 08:09:46.692828 4806 generic.go:334] "Generic (PLEG): container finished" podID="70aa246b-31a1-4800-b76e-d50a2002a5f8" containerID="f14a97a810325b3c8d26c7150edb7a84cf12200e4f6da03239730927e4dd5019" exitCode=0 Jan 26 08:09:46 crc kubenswrapper[4806]: I0126 08:09:46.692896 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"70aa246b-31a1-4800-b76e-d50a2002a5f8","Type":"ContainerDied","Data":"f14a97a810325b3c8d26c7150edb7a84cf12200e4f6da03239730927e4dd5019"} Jan 26 08:09:46 crc kubenswrapper[4806]: I0126 08:09:46.695662 4806 generic.go:334] "Generic (PLEG): container finished" podID="cc07bbaf-381b-4edc-acd9-48211c3eb4c6" containerID="3eb0ce335ea65fbff8eb63b8d5d597c0e95ebd5e09fc5b915be8d057e8522be8" exitCode=0 Jan 26 08:09:46 crc kubenswrapper[4806]: I0126 08:09:46.696160 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cc07bbaf-381b-4edc-acd9-48211c3eb4c6","Type":"ContainerDied","Data":"3eb0ce335ea65fbff8eb63b8d5d597c0e95ebd5e09fc5b915be8d057e8522be8"} Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.358709 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.398219 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.665939 4806 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.699756 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.703198 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"70aa246b-31a1-4800-b76e-d50a2002a5f8","Type":"ContainerStarted","Data":"9cb86793f2a704c19a53e12761c46e0fd0a14d187215c5d983918b2a5ecc2f11"} Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.704945 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"cc07bbaf-381b-4edc-acd9-48211c3eb4c6","Type":"ContainerStarted","Data":"ca9801ca380b4d280d1e1e067146810bae3487ca6573587dd624cc1a1edc9683"} Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.705491 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.705508 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.740439 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=10.887304503 podStartE2EDuration="51.740353409s" podCreationTimestamp="2026-01-26 08:08:56 +0000 UTC" firstStartedPulling="2026-01-26 08:08:58.851347473 +0000 UTC m=+918.115755529" lastFinishedPulling="2026-01-26 08:09:39.704396379 +0000 UTC m=+958.968804435" observedRunningTime="2026-01-26 08:09:47.739376742 +0000 UTC m=+967.003784808" watchObservedRunningTime="2026-01-26 08:09:47.740353409 +0000 UTC m=+967.004761465" Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.754119 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.762143 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.631604223 podStartE2EDuration="49.762122586s" podCreationTimestamp="2026-01-26 08:08:58 +0000 UTC" firstStartedPulling="2026-01-26 08:09:00.576311023 +0000 UTC m=+919.840719079" lastFinishedPulling="2026-01-26 08:09:39.706829386 +0000 UTC m=+958.971237442" observedRunningTime="2026-01-26 08:09:47.760004738 +0000 UTC m=+967.024412794" watchObservedRunningTime="2026-01-26 08:09:47.762122586 +0000 UTC m=+967.026530632" Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.764002 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.977053 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 26 08:09:47 crc kubenswrapper[4806]: I0126 08:09:47.977317 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.026734 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.028030 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.043676 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.044020 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.045429 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-p65gb" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.051637 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.095405 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.217771 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55fab6bb-f40a-4964-b87e-be61729787a2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.217940 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55fab6bb-f40a-4964-b87e-be61729787a2-config\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.218199 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/55fab6bb-f40a-4964-b87e-be61729787a2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.218320 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2rs8\" (UniqueName: \"kubernetes.io/projected/55fab6bb-f40a-4964-b87e-be61729787a2-kube-api-access-m2rs8\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.218437 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55fab6bb-f40a-4964-b87e-be61729787a2-scripts\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.218541 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55fab6bb-f40a-4964-b87e-be61729787a2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.218646 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55fab6bb-f40a-4964-b87e-be61729787a2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: 
I0126 08:09:48.320715 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55fab6bb-f40a-4964-b87e-be61729787a2-scripts\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.320771 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55fab6bb-f40a-4964-b87e-be61729787a2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.320808 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55fab6bb-f40a-4964-b87e-be61729787a2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.321762 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55fab6bb-f40a-4964-b87e-be61729787a2-scripts\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.321931 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55fab6bb-f40a-4964-b87e-be61729787a2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.321994 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55fab6bb-f40a-4964-b87e-be61729787a2-config\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.322062 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/55fab6bb-f40a-4964-b87e-be61729787a2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.322104 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2rs8\" (UniqueName: \"kubernetes.io/projected/55fab6bb-f40a-4964-b87e-be61729787a2-kube-api-access-m2rs8\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.322360 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/55fab6bb-f40a-4964-b87e-be61729787a2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.323119 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55fab6bb-f40a-4964-b87e-be61729787a2-config\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.327148 4806 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55fab6bb-f40a-4964-b87e-be61729787a2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.327719 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/55fab6bb-f40a-4964-b87e-be61729787a2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.341223 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/55fab6bb-f40a-4964-b87e-be61729787a2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.350665 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2rs8\" (UniqueName: \"kubernetes.io/projected/55fab6bb-f40a-4964-b87e-be61729787a2-kube-api-access-m2rs8\") pod \"ovn-northd-0\" (UID: \"55fab6bb-f40a-4964-b87e-be61729787a2\") " pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.363049 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 08:09:48 crc kubenswrapper[4806]: W0126 08:09:48.870203 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55fab6bb_f40a_4964_b87e_be61729787a2.slice/crio-42101a3efece8f77046756722c643b12ce349ed924ef8ee971749bb82f700da8 WatchSource:0}: Error finding container 42101a3efece8f77046756722c643b12ce349ed924ef8ee971749bb82f700da8: Status 404 returned error can't find the container with id 42101a3efece8f77046756722c643b12ce349ed924ef8ee971749bb82f700da8 Jan 26 08:09:48 crc kubenswrapper[4806]: I0126 08:09:48.880762 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 08:09:49 crc kubenswrapper[4806]: I0126 08:09:49.102168 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:09:49 crc kubenswrapper[4806]: I0126 08:09:49.440390 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 08:09:49 crc kubenswrapper[4806]: I0126 08:09:49.440720 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 08:09:49 crc kubenswrapper[4806]: I0126 08:09:49.717997 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a","Type":"ContainerStarted","Data":"01d822e815455b90e89921b950bd9623a731401fcd546f20ff1ebec61aab5f8e"} Jan 26 08:09:49 crc kubenswrapper[4806]: I0126 08:09:49.718443 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 08:09:49 crc kubenswrapper[4806]: I0126 08:09:49.720486 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"55fab6bb-f40a-4964-b87e-be61729787a2","Type":"ContainerStarted","Data":"42101a3efece8f77046756722c643b12ce349ed924ef8ee971749bb82f700da8"} Jan 26 08:09:49 
crc kubenswrapper[4806]: I0126 08:09:49.733913 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.8633216830000001 podStartE2EDuration="48.733884588s" podCreationTimestamp="2026-01-26 08:09:01 +0000 UTC" firstStartedPulling="2026-01-26 08:09:02.573487252 +0000 UTC m=+921.837895298" lastFinishedPulling="2026-01-26 08:09:49.444050127 +0000 UTC m=+968.708458203" observedRunningTime="2026-01-26 08:09:49.72995759 +0000 UTC m=+968.994365646" watchObservedRunningTime="2026-01-26 08:09:49.733884588 +0000 UTC m=+968.998292644" Jan 26 08:09:50 crc kubenswrapper[4806]: I0126 08:09:50.728387 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"55fab6bb-f40a-4964-b87e-be61729787a2","Type":"ContainerStarted","Data":"f44729dfd740dd5fb6c30a3f99e484edc2f6ea21c13b0de6df58758d11bd7a0a"} Jan 26 08:09:50 crc kubenswrapper[4806]: I0126 08:09:50.728453 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"55fab6bb-f40a-4964-b87e-be61729787a2","Type":"ContainerStarted","Data":"f06447af9368dd5009d045e01bbebdba0df27ca665ca1d1795080a4261e0f9da"} Jan 26 08:09:50 crc kubenswrapper[4806]: I0126 08:09:50.748907 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.5677138290000001 podStartE2EDuration="2.748885981s" podCreationTimestamp="2026-01-26 08:09:48 +0000 UTC" firstStartedPulling="2026-01-26 08:09:48.872563809 +0000 UTC m=+968.136971865" lastFinishedPulling="2026-01-26 08:09:50.053735961 +0000 UTC m=+969.318144017" observedRunningTime="2026-01-26 08:09:50.7473872 +0000 UTC m=+970.011795256" watchObservedRunningTime="2026-01-26 08:09:50.748885981 +0000 UTC m=+970.013294037" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.594373 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-92zz8"] Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.594679 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-92zz8" podUID="16f86cb8-055a-4696-9aca-e994ec8ba516" containerName="dnsmasq-dns" containerID="cri-o://f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81" gracePeriod=10 Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.602680 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.625658 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-xcfqg"] Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.627111 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.684615 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-xcfqg"] Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.746027 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.791331 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2nrn\" (UniqueName: \"kubernetes.io/projected/c6ba8a7a-2708-4123-90e8-5b66f4c86448-kube-api-access-p2nrn\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.791651 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.791755 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-config\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.791873 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.791966 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.893139 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.893487 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.893831 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2nrn\" (UniqueName: \"kubernetes.io/projected/c6ba8a7a-2708-4123-90e8-5b66f4c86448-kube-api-access-p2nrn\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.893915 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.893997 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-config\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.894378 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.894550 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.894818 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.894989 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-config\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.939960 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2nrn\" (UniqueName: \"kubernetes.io/projected/c6ba8a7a-2708-4123-90e8-5b66f4c86448-kube-api-access-p2nrn\") pod \"dnsmasq-dns-b8fbc5445-xcfqg\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:51 crc kubenswrapper[4806]: I0126 08:09:51.959446 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.291054 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.474945 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.683568 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-xcfqg"] Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.717630 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.755721 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" event={"ID":"c6ba8a7a-2708-4123-90e8-5b66f4c86448","Type":"ContainerStarted","Data":"7bdfa702d8bdd0253f1bf21e60c80fe4c36377660e1647ceb3bb25f0526f6052"} Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.760612 4806 generic.go:334] "Generic (PLEG): container finished" podID="16f86cb8-055a-4696-9aca-e994ec8ba516" containerID="f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81" exitCode=0 Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.762068 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-92zz8" event={"ID":"16f86cb8-055a-4696-9aca-e994ec8ba516","Type":"ContainerDied","Data":"f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81"} Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.762461 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-92zz8" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.763615 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-92zz8" event={"ID":"16f86cb8-055a-4696-9aca-e994ec8ba516","Type":"ContainerDied","Data":"043216a60f60cc272f2b4d25e6d11834320d62a380e378d8d63635878f18a38c"} Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.763659 4806 scope.go:117] "RemoveContainer" containerID="f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.809569 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-ovsdbserver-nb\") pod \"16f86cb8-055a-4696-9aca-e994ec8ba516\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.809656 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62snv\" (UniqueName: \"kubernetes.io/projected/16f86cb8-055a-4696-9aca-e994ec8ba516-kube-api-access-62snv\") pod \"16f86cb8-055a-4696-9aca-e994ec8ba516\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.809674 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-ovsdbserver-sb\") pod \"16f86cb8-055a-4696-9aca-e994ec8ba516\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.809696 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-config\") pod \"16f86cb8-055a-4696-9aca-e994ec8ba516\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.809788 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-dns-svc\") pod \"16f86cb8-055a-4696-9aca-e994ec8ba516\" (UID: \"16f86cb8-055a-4696-9aca-e994ec8ba516\") " Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.825688 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/16f86cb8-055a-4696-9aca-e994ec8ba516-kube-api-access-62snv" (OuterVolumeSpecName: "kube-api-access-62snv") pod "16f86cb8-055a-4696-9aca-e994ec8ba516" (UID: "16f86cb8-055a-4696-9aca-e994ec8ba516"). InnerVolumeSpecName "kube-api-access-62snv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.833669 4806 scope.go:117] "RemoveContainer" containerID="7d417ef4490e8773232f88b1cf4592ad0f4d65ba45ff8f440e2516c9fde69bad" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.873988 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-config" (OuterVolumeSpecName: "config") pod "16f86cb8-055a-4696-9aca-e994ec8ba516" (UID: "16f86cb8-055a-4696-9aca-e994ec8ba516"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.902880 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "16f86cb8-055a-4696-9aca-e994ec8ba516" (UID: "16f86cb8-055a-4696-9aca-e994ec8ba516"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.904649 4806 scope.go:117] "RemoveContainer" containerID="f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81" Jan 26 08:09:52 crc kubenswrapper[4806]: E0126 08:09:52.907428 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81\": container with ID starting with f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81 not found: ID does not exist" containerID="f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.907464 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81"} err="failed to get container status \"f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81\": rpc error: code = NotFound desc = could not find container \"f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81\": container with ID starting with f3389644475b61a1246b1cc340f48ba6b9a63cc291cbecb4c4ec17dc64516e81 not found: ID does not exist" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.907489 4806 scope.go:117] "RemoveContainer" containerID="7d417ef4490e8773232f88b1cf4592ad0f4d65ba45ff8f440e2516c9fde69bad" Jan 26 08:09:52 crc kubenswrapper[4806]: E0126 08:09:52.908328 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d417ef4490e8773232f88b1cf4592ad0f4d65ba45ff8f440e2516c9fde69bad\": container with ID starting with 7d417ef4490e8773232f88b1cf4592ad0f4d65ba45ff8f440e2516c9fde69bad not found: ID does not exist" containerID="7d417ef4490e8773232f88b1cf4592ad0f4d65ba45ff8f440e2516c9fde69bad" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.908360 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d417ef4490e8773232f88b1cf4592ad0f4d65ba45ff8f440e2516c9fde69bad"} err="failed to get container status 
\"7d417ef4490e8773232f88b1cf4592ad0f4d65ba45ff8f440e2516c9fde69bad\": rpc error: code = NotFound desc = could not find container \"7d417ef4490e8773232f88b1cf4592ad0f4d65ba45ff8f440e2516c9fde69bad\": container with ID starting with 7d417ef4490e8773232f88b1cf4592ad0f4d65ba45ff8f440e2516c9fde69bad not found: ID does not exist" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.911954 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62snv\" (UniqueName: \"kubernetes.io/projected/16f86cb8-055a-4696-9aca-e994ec8ba516-kube-api-access-62snv\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.911979 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.911989 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.913870 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16f86cb8-055a-4696-9aca-e994ec8ba516" (UID: "16f86cb8-055a-4696-9aca-e994ec8ba516"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.927124 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "16f86cb8-055a-4696-9aca-e994ec8ba516" (UID: "16f86cb8-055a-4696-9aca-e994ec8ba516"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.979320 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 26 08:09:52 crc kubenswrapper[4806]: E0126 08:09:52.979641 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f86cb8-055a-4696-9aca-e994ec8ba516" containerName="dnsmasq-dns" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.979657 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f86cb8-055a-4696-9aca-e994ec8ba516" containerName="dnsmasq-dns" Jan 26 08:09:52 crc kubenswrapper[4806]: E0126 08:09:52.979675 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f86cb8-055a-4696-9aca-e994ec8ba516" containerName="init" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.979683 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f86cb8-055a-4696-9aca-e994ec8ba516" containerName="init" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.979836 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f86cb8-055a-4696-9aca-e994ec8ba516" containerName="dnsmasq-dns" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.988409 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.992029 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.992044 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.992061 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 26 08:09:52 crc kubenswrapper[4806]: I0126 08:09:52.992197 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-44crc" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.003225 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.013561 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fcc22009-cca0-438b-8f2f-5c245db7c70c-cache\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.013610 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc22009-cca0-438b-8f2f-5c245db7c70c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.013692 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.013763 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fn8x\" (UniqueName: \"kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-kube-api-access-7fn8x\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.013991 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fcc22009-cca0-438b-8f2f-5c245db7c70c-lock\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.014026 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.014114 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.014133 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/16f86cb8-055a-4696-9aca-e994ec8ba516-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.110889 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-92zz8"] Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.115780 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.115909 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fn8x\" (UniqueName: \"kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-kube-api-access-7fn8x\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.115945 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fcc22009-cca0-438b-8f2f-5c245db7c70c-lock\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.115983 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.116039 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fcc22009-cca0-438b-8f2f-5c245db7c70c-cache\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.116089 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc22009-cca0-438b-8f2f-5c245db7c70c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.116108 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.116831 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/fcc22009-cca0-438b-8f2f-5c245db7c70c-cache\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.116849 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-92zz8"] Jan 26 08:09:53 crc kubenswrapper[4806]: E0126 08:09:53.116946 4806 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 08:09:53 crc kubenswrapper[4806]: E0126 08:09:53.116970 4806 projected.go:194] Error 
preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.117180 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/fcc22009-cca0-438b-8f2f-5c245db7c70c-lock\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: E0126 08:09:53.117253 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift podName:fcc22009-cca0-438b-8f2f-5c245db7c70c nodeName:}" failed. No retries permitted until 2026-01-26 08:09:53.617235012 +0000 UTC m=+972.881643138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift") pod "swift-storage-0" (UID: "fcc22009-cca0-438b-8f2f-5c245db7c70c") : configmap "swift-ring-files" not found Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.127377 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc22009-cca0-438b-8f2f-5c245db7c70c-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.135322 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fn8x\" (UniqueName: \"kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-kube-api-access-7fn8x\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.143408 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.623725 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:53 crc kubenswrapper[4806]: E0126 08:09:53.623924 4806 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 08:09:53 crc kubenswrapper[4806]: E0126 08:09:53.624318 4806 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 08:09:53 crc kubenswrapper[4806]: E0126 08:09:53.624370 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift podName:fcc22009-cca0-438b-8f2f-5c245db7c70c nodeName:}" failed. No retries permitted until 2026-01-26 08:09:54.624355333 +0000 UTC m=+973.888763389 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift") pod "swift-storage-0" (UID: "fcc22009-cca0-438b-8f2f-5c245db7c70c") : configmap "swift-ring-files" not found Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.699128 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.777426 4806 generic.go:334] "Generic (PLEG): container finished" podID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerID="b5f28c2f6936959437a8720d86a5fd0a6d58c9f8b90ccd071827bf23cd18a21a" exitCode=0 Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.777533 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" event={"ID":"c6ba8a7a-2708-4123-90e8-5b66f4c86448","Type":"ContainerDied","Data":"b5f28c2f6936959437a8720d86a5fd0a6d58c9f8b90ccd071827bf23cd18a21a"} Jan 26 08:09:53 crc kubenswrapper[4806]: I0126 08:09:53.853563 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 08:09:54 crc kubenswrapper[4806]: I0126 08:09:54.648999 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:54 crc kubenswrapper[4806]: E0126 08:09:54.649574 4806 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 08:09:54 crc kubenswrapper[4806]: E0126 08:09:54.649591 4806 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 08:09:54 crc kubenswrapper[4806]: E0126 08:09:54.649662 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift podName:fcc22009-cca0-438b-8f2f-5c245db7c70c nodeName:}" failed. No retries permitted until 2026-01-26 08:09:56.64964442 +0000 UTC m=+975.914052476 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift") pod "swift-storage-0" (UID: "fcc22009-cca0-438b-8f2f-5c245db7c70c") : configmap "swift-ring-files" not found Jan 26 08:09:54 crc kubenswrapper[4806]: I0126 08:09:54.784235 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" event={"ID":"c6ba8a7a-2708-4123-90e8-5b66f4c86448","Type":"ContainerStarted","Data":"f5b7aba37df1ab70703a3ef3dc28df0cf9e18d2c32129934f84e93f139ee5b72"} Jan 26 08:09:54 crc kubenswrapper[4806]: I0126 08:09:54.785115 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:09:54 crc kubenswrapper[4806]: I0126 08:09:54.811909 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" podStartSLOduration=3.811894301 podStartE2EDuration="3.811894301s" podCreationTimestamp="2026-01-26 08:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:09:54.809234738 +0000 UTC m=+974.073642794" watchObservedRunningTime="2026-01-26 08:09:54.811894301 +0000 UTC m=+974.076302357" Jan 26 08:09:54 crc kubenswrapper[4806]: I0126 08:09:54.911880 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-65f6-account-create-update-l6nk9"] Jan 26 08:09:54 crc kubenswrapper[4806]: I0126 08:09:54.913378 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-65f6-account-create-update-l6nk9" Jan 26 08:09:54 crc kubenswrapper[4806]: I0126 08:09:54.915335 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 08:09:54 crc kubenswrapper[4806]: I0126 08:09:54.929737 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-65f6-account-create-update-l6nk9"] Jan 26 08:09:54 crc kubenswrapper[4806]: I0126 08:09:54.967942 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jlcbm"] Jan 26 08:09:54 crc kubenswrapper[4806]: I0126 08:09:54.969036 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jlcbm" Jan 26 08:09:54 crc kubenswrapper[4806]: I0126 08:09:54.997084 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jlcbm"] Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.056819 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f86cb8-055a-4696-9aca-e994ec8ba516" path="/var/lib/kubelet/pods/16f86cb8-055a-4696-9aca-e994ec8ba516/volumes" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.058316 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301f16bd-223a-43a2-89cb-1bff1beac16e-operator-scripts\") pod \"glance-65f6-account-create-update-l6nk9\" (UID: \"301f16bd-223a-43a2-89cb-1bff1beac16e\") " pod="openstack/glance-65f6-account-create-update-l6nk9" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.058362 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnrnh\" (UniqueName: \"kubernetes.io/projected/301f16bd-223a-43a2-89cb-1bff1beac16e-kube-api-access-tnrnh\") pod \"glance-65f6-account-create-update-l6nk9\" (UID: \"301f16bd-223a-43a2-89cb-1bff1beac16e\") " pod="openstack/glance-65f6-account-create-update-l6nk9" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.160053 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnrnh\" (UniqueName: \"kubernetes.io/projected/301f16bd-223a-43a2-89cb-1bff1beac16e-kube-api-access-tnrnh\") pod \"glance-65f6-account-create-update-l6nk9\" (UID: \"301f16bd-223a-43a2-89cb-1bff1beac16e\") " pod="openstack/glance-65f6-account-create-update-l6nk9" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.160117 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgs45\" (UniqueName: \"kubernetes.io/projected/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f-kube-api-access-zgs45\") pod \"glance-db-create-jlcbm\" (UID: \"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f\") " pod="openstack/glance-db-create-jlcbm" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.160197 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f-operator-scripts\") pod \"glance-db-create-jlcbm\" (UID: \"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f\") " pod="openstack/glance-db-create-jlcbm" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.160293 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301f16bd-223a-43a2-89cb-1bff1beac16e-operator-scripts\") pod \"glance-65f6-account-create-update-l6nk9\" (UID: \"301f16bd-223a-43a2-89cb-1bff1beac16e\") " pod="openstack/glance-65f6-account-create-update-l6nk9" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.162262 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301f16bd-223a-43a2-89cb-1bff1beac16e-operator-scripts\") pod \"glance-65f6-account-create-update-l6nk9\" (UID: \"301f16bd-223a-43a2-89cb-1bff1beac16e\") " pod="openstack/glance-65f6-account-create-update-l6nk9" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.181333 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tnrnh\" (UniqueName: \"kubernetes.io/projected/301f16bd-223a-43a2-89cb-1bff1beac16e-kube-api-access-tnrnh\") pod \"glance-65f6-account-create-update-l6nk9\" (UID: \"301f16bd-223a-43a2-89cb-1bff1beac16e\") " pod="openstack/glance-65f6-account-create-update-l6nk9" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.258713 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-65f6-account-create-update-l6nk9" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.261678 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgs45\" (UniqueName: \"kubernetes.io/projected/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f-kube-api-access-zgs45\") pod \"glance-db-create-jlcbm\" (UID: \"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f\") " pod="openstack/glance-db-create-jlcbm" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.261767 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f-operator-scripts\") pod \"glance-db-create-jlcbm\" (UID: \"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f\") " pod="openstack/glance-db-create-jlcbm" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.262777 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f-operator-scripts\") pod \"glance-db-create-jlcbm\" (UID: \"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f\") " pod="openstack/glance-db-create-jlcbm" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.281766 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgs45\" (UniqueName: \"kubernetes.io/projected/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f-kube-api-access-zgs45\") pod \"glance-db-create-jlcbm\" (UID: \"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f\") " pod="openstack/glance-db-create-jlcbm" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.298071 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jlcbm" Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.731757 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-65f6-account-create-update-l6nk9"] Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.792457 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-65f6-account-create-update-l6nk9" event={"ID":"301f16bd-223a-43a2-89cb-1bff1beac16e","Type":"ContainerStarted","Data":"c573503fe3397572bfb1b7ea6591be8ebb09fde3e01aa04ddef755b957279636"} Jan 26 08:09:55 crc kubenswrapper[4806]: I0126 08:09:55.893330 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jlcbm"] Jan 26 08:09:55 crc kubenswrapper[4806]: W0126 08:09:55.901343 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fecccc8_6319_47b6_9dcb_e1d09c53cc1f.slice/crio-513cbc16b99527fdd889760734066a0cb9a72b68dd0fab84617972c32e6a9c70 WatchSource:0}: Error finding container 513cbc16b99527fdd889760734066a0cb9a72b68dd0fab84617972c32e6a9c70: Status 404 returned error can't find the container with id 513cbc16b99527fdd889760734066a0cb9a72b68dd0fab84617972c32e6a9c70 Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.533825 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pfzk8"] Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.535337 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pfzk8" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.543708 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pfzk8"] Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.546702 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.654356 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-6w7w5"] Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.655392 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.657549 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.658932 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.669943 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.688701 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.688792 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73d5149-0bc4-4db1-83fe-a3ae8745d0de-operator-scripts\") pod \"root-account-create-update-pfzk8\" (UID: \"c73d5149-0bc4-4db1-83fe-a3ae8745d0de\") " pod="openstack/root-account-create-update-pfzk8" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.688829 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzwps\" (UniqueName: \"kubernetes.io/projected/c73d5149-0bc4-4db1-83fe-a3ae8745d0de-kube-api-access-jzwps\") pod \"root-account-create-update-pfzk8\" (UID: \"c73d5149-0bc4-4db1-83fe-a3ae8745d0de\") " pod="openstack/root-account-create-update-pfzk8" Jan 26 08:09:56 crc kubenswrapper[4806]: E0126 08:09:56.688940 4806 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 08:09:56 crc kubenswrapper[4806]: E0126 08:09:56.688969 4806 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 08:09:56 crc kubenswrapper[4806]: E0126 08:09:56.689026 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift podName:fcc22009-cca0-438b-8f2f-5c245db7c70c nodeName:}" failed. No retries permitted until 2026-01-26 08:10:00.689004606 +0000 UTC m=+979.953412742 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift") pod "swift-storage-0" (UID: "fcc22009-cca0-438b-8f2f-5c245db7c70c") : configmap "swift-ring-files" not found Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.698595 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-2dckc"] Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.699815 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.706157 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6w7w5"] Jan 26 08:09:56 crc kubenswrapper[4806]: E0126 08:09:56.706871 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-7bbmq ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-6w7w5" podUID="5c44b1d5-8329-42c3-a12a-b83aba0cf701" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.730018 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2dckc"] Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.760702 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6w7w5"] Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.789875 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5c44b1d5-8329-42c3-a12a-b83aba0cf701-etc-swift\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.789934 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-combined-ca-bundle\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.790078 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73d5149-0bc4-4db1-83fe-a3ae8745d0de-operator-scripts\") pod \"root-account-create-update-pfzk8\" (UID: \"c73d5149-0bc4-4db1-83fe-a3ae8745d0de\") " pod="openstack/root-account-create-update-pfzk8" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.790126 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzwps\" (UniqueName: \"kubernetes.io/projected/c73d5149-0bc4-4db1-83fe-a3ae8745d0de-kube-api-access-jzwps\") pod \"root-account-create-update-pfzk8\" (UID: \"c73d5149-0bc4-4db1-83fe-a3ae8745d0de\") " pod="openstack/root-account-create-update-pfzk8" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.790170 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bbmq\" (UniqueName: \"kubernetes.io/projected/5c44b1d5-8329-42c3-a12a-b83aba0cf701-kube-api-access-7bbmq\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.790669 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5c44b1d5-8329-42c3-a12a-b83aba0cf701-ring-data-devices\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.790824 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c44b1d5-8329-42c3-a12a-b83aba0cf701-scripts\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.790855 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-swiftconf\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.790922 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-dispersionconf\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.790982 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73d5149-0bc4-4db1-83fe-a3ae8745d0de-operator-scripts\") pod \"root-account-create-update-pfzk8\" (UID: \"c73d5149-0bc4-4db1-83fe-a3ae8745d0de\") " pod="openstack/root-account-create-update-pfzk8" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.800631 4806 generic.go:334] "Generic (PLEG): container finished" podID="7fecccc8-6319-47b6-9dcb-e1d09c53cc1f" containerID="f2bdba503a501b38e95f010154a4d5eb0e0014df07100e037c08be6a8074d791" exitCode=0 Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.800700 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jlcbm" event={"ID":"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f","Type":"ContainerDied","Data":"f2bdba503a501b38e95f010154a4d5eb0e0014df07100e037c08be6a8074d791"} Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.800729 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jlcbm" event={"ID":"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f","Type":"ContainerStarted","Data":"513cbc16b99527fdd889760734066a0cb9a72b68dd0fab84617972c32e6a9c70"} Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.804601 4806 generic.go:334] "Generic (PLEG): container finished" podID="301f16bd-223a-43a2-89cb-1bff1beac16e" containerID="d839a21d9f5680a0000ddc6233a37b8e2d9de5992e5ef6d9eb9cb408d42deadb" exitCode=0 Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.805380 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-65f6-account-create-update-l6nk9" event={"ID":"301f16bd-223a-43a2-89cb-1bff1beac16e","Type":"ContainerDied","Data":"d839a21d9f5680a0000ddc6233a37b8e2d9de5992e5ef6d9eb9cb408d42deadb"} Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.805427 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.812858 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.816086 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzwps\" (UniqueName: \"kubernetes.io/projected/c73d5149-0bc4-4db1-83fe-a3ae8745d0de-kube-api-access-jzwps\") pod \"root-account-create-update-pfzk8\" (UID: \"c73d5149-0bc4-4db1-83fe-a3ae8745d0de\") " pod="openstack/root-account-create-update-pfzk8" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.848440 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pfzk8" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.892447 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/061b909a-a88f-4261-9ccf-2daaf3958621-etc-swift\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.892622 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/061b909a-a88f-4261-9ccf-2daaf3958621-scripts\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.892682 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bbmq\" (UniqueName: \"kubernetes.io/projected/5c44b1d5-8329-42c3-a12a-b83aba0cf701-kube-api-access-7bbmq\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.892720 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5c44b1d5-8329-42c3-a12a-b83aba0cf701-ring-data-devices\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.892804 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/061b909a-a88f-4261-9ccf-2daaf3958621-ring-data-devices\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.892846 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-dispersionconf\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.892866 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c44b1d5-8329-42c3-a12a-b83aba0cf701-scripts\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.892908 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-swiftconf\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.893398 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-dispersionconf\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.893419 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5c44b1d5-8329-42c3-a12a-b83aba0cf701-ring-data-devices\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.893453 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c44b1d5-8329-42c3-a12a-b83aba0cf701-scripts\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.893476 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5c44b1d5-8329-42c3-a12a-b83aba0cf701-etc-swift\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.893505 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4x95\" (UniqueName: \"kubernetes.io/projected/061b909a-a88f-4261-9ccf-2daaf3958621-kube-api-access-r4x95\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.893573 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-combined-ca-bundle\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.893603 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-swiftconf\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.893620 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-combined-ca-bundle\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.894005 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/5c44b1d5-8329-42c3-a12a-b83aba0cf701-etc-swift\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.896201 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-swiftconf\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.898728 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-combined-ca-bundle\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.899066 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-dispersionconf\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.909885 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bbmq\" (UniqueName: \"kubernetes.io/projected/5c44b1d5-8329-42c3-a12a-b83aba0cf701-kube-api-access-7bbmq\") pod \"swift-ring-rebalance-6w7w5\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.997812 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5c44b1d5-8329-42c3-a12a-b83aba0cf701-etc-swift\") pod \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998138 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-swiftconf\") pod \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998140 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c44b1d5-8329-42c3-a12a-b83aba0cf701-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5c44b1d5-8329-42c3-a12a-b83aba0cf701" (UID: "5c44b1d5-8329-42c3-a12a-b83aba0cf701"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998248 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-combined-ca-bundle\") pod \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998274 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c44b1d5-8329-42c3-a12a-b83aba0cf701-scripts\") pod \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998334 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5c44b1d5-8329-42c3-a12a-b83aba0cf701-ring-data-devices\") pod \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998548 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/061b909a-a88f-4261-9ccf-2daaf3958621-ring-data-devices\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998575 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-dispersionconf\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998651 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4x95\" (UniqueName: \"kubernetes.io/projected/061b909a-a88f-4261-9ccf-2daaf3958621-kube-api-access-r4x95\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998672 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-swiftconf\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998687 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-combined-ca-bundle\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998731 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/061b909a-a88f-4261-9ccf-2daaf3958621-etc-swift\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998770 4806 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/061b909a-a88f-4261-9ccf-2daaf3958621-scripts\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.998807 4806 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/5c44b1d5-8329-42c3-a12a-b83aba0cf701-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.999425 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/061b909a-a88f-4261-9ccf-2daaf3958621-scripts\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.999501 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c44b1d5-8329-42c3-a12a-b83aba0cf701-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "5c44b1d5-8329-42c3-a12a-b83aba0cf701" (UID: "5c44b1d5-8329-42c3-a12a-b83aba0cf701"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:56 crc kubenswrapper[4806]: I0126 08:09:56.999687 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/061b909a-a88f-4261-9ccf-2daaf3958621-ring-data-devices\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:56.999968 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/061b909a-a88f-4261-9ccf-2daaf3958621-etc-swift\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.000250 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c44b1d5-8329-42c3-a12a-b83aba0cf701-scripts" (OuterVolumeSpecName: "scripts") pod "5c44b1d5-8329-42c3-a12a-b83aba0cf701" (UID: "5c44b1d5-8329-42c3-a12a-b83aba0cf701"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.002838 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c44b1d5-8329-42c3-a12a-b83aba0cf701" (UID: "5c44b1d5-8329-42c3-a12a-b83aba0cf701"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.002910 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "5c44b1d5-8329-42c3-a12a-b83aba0cf701" (UID: "5c44b1d5-8329-42c3-a12a-b83aba0cf701"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.006129 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-combined-ca-bundle\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.008290 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-swiftconf\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.017442 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-dispersionconf\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.021752 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4x95\" (UniqueName: \"kubernetes.io/projected/061b909a-a88f-4261-9ccf-2daaf3958621-kube-api-access-r4x95\") pod \"swift-ring-rebalance-2dckc\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.100160 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-dispersionconf\") pod \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.100488 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bbmq\" (UniqueName: \"kubernetes.io/projected/5c44b1d5-8329-42c3-a12a-b83aba0cf701-kube-api-access-7bbmq\") pod \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\" (UID: \"5c44b1d5-8329-42c3-a12a-b83aba0cf701\") " Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.102886 4806 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.103148 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.103102 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "5c44b1d5-8329-42c3-a12a-b83aba0cf701" (UID: "5c44b1d5-8329-42c3-a12a-b83aba0cf701"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.103231 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5c44b1d5-8329-42c3-a12a-b83aba0cf701-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.103282 4806 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/5c44b1d5-8329-42c3-a12a-b83aba0cf701-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.105653 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c44b1d5-8329-42c3-a12a-b83aba0cf701-kube-api-access-7bbmq" (OuterVolumeSpecName: "kube-api-access-7bbmq") pod "5c44b1d5-8329-42c3-a12a-b83aba0cf701" (UID: "5c44b1d5-8329-42c3-a12a-b83aba0cf701"). InnerVolumeSpecName "kube-api-access-7bbmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.204347 4806 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/5c44b1d5-8329-42c3-a12a-b83aba0cf701-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.204379 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bbmq\" (UniqueName: \"kubernetes.io/projected/5c44b1d5-8329-42c3-a12a-b83aba0cf701-kube-api-access-7bbmq\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.313827 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.371904 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pfzk8"] Jan 26 08:09:57 crc kubenswrapper[4806]: W0126 08:09:57.380830 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc73d5149_0bc4_4db1_83fe_a3ae8745d0de.slice/crio-05d04600b0391a3d3824f52f0a02ead8f771d8960efb62a6364c2afd7cb38030 WatchSource:0}: Error finding container 05d04600b0391a3d3824f52f0a02ead8f771d8960efb62a6364c2afd7cb38030: Status 404 returned error can't find the container with id 05d04600b0391a3d3824f52f0a02ead8f771d8960efb62a6364c2afd7cb38030 Jan 26 08:09:57 crc kubenswrapper[4806]: W0126 08:09:57.833806 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod061b909a_a88f_4261_9ccf_2daaf3958621.slice/crio-e90f6db0cc932c03752a37141ff563db329226ff18f85822d5ef2688073f6fb4 WatchSource:0}: Error finding container e90f6db0cc932c03752a37141ff563db329226ff18f85822d5ef2688073f6fb4: Status 404 returned error can't find the container with id e90f6db0cc932c03752a37141ff563db329226ff18f85822d5ef2688073f6fb4 Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.835167 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-2dckc"] Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.839970 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pfzk8" event={"ID":"c73d5149-0bc4-4db1-83fe-a3ae8745d0de","Type":"ContainerStarted","Data":"344e67022fbd133bb9dec4fc7a0fe008a30abc1aa17d81e24539680056095056"} Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 
08:09:57.840032 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pfzk8" event={"ID":"c73d5149-0bc4-4db1-83fe-a3ae8745d0de","Type":"ContainerStarted","Data":"05d04600b0391a3d3824f52f0a02ead8f771d8960efb62a6364c2afd7cb38030"} Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.840090 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6w7w5" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.859415 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-pfzk8" podStartSLOduration=1.859400363 podStartE2EDuration="1.859400363s" podCreationTimestamp="2026-01-26 08:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:09:57.857348587 +0000 UTC m=+977.121756643" watchObservedRunningTime="2026-01-26 08:09:57.859400363 +0000 UTC m=+977.123808419" Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.900592 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6w7w5"] Jan 26 08:09:57 crc kubenswrapper[4806]: I0126 08:09:57.906250 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-6w7w5"] Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.370271 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jlcbm" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.375069 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-65f6-account-create-update-l6nk9" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.443709 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgs45\" (UniqueName: \"kubernetes.io/projected/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f-kube-api-access-zgs45\") pod \"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f\" (UID: \"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f\") " Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.443763 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301f16bd-223a-43a2-89cb-1bff1beac16e-operator-scripts\") pod \"301f16bd-223a-43a2-89cb-1bff1beac16e\" (UID: \"301f16bd-223a-43a2-89cb-1bff1beac16e\") " Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.443906 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f-operator-scripts\") pod \"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f\" (UID: \"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f\") " Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.443972 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnrnh\" (UniqueName: \"kubernetes.io/projected/301f16bd-223a-43a2-89cb-1bff1beac16e-kube-api-access-tnrnh\") pod \"301f16bd-223a-43a2-89cb-1bff1beac16e\" (UID: \"301f16bd-223a-43a2-89cb-1bff1beac16e\") " Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.444533 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/301f16bd-223a-43a2-89cb-1bff1beac16e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "301f16bd-223a-43a2-89cb-1bff1beac16e" (UID: "301f16bd-223a-43a2-89cb-1bff1beac16e"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.445274 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fecccc8-6319-47b6-9dcb-e1d09c53cc1f" (UID: "7fecccc8-6319-47b6-9dcb-e1d09c53cc1f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.449857 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301f16bd-223a-43a2-89cb-1bff1beac16e-kube-api-access-tnrnh" (OuterVolumeSpecName: "kube-api-access-tnrnh") pod "301f16bd-223a-43a2-89cb-1bff1beac16e" (UID: "301f16bd-223a-43a2-89cb-1bff1beac16e"). InnerVolumeSpecName "kube-api-access-tnrnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.461646 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f-kube-api-access-zgs45" (OuterVolumeSpecName: "kube-api-access-zgs45") pod "7fecccc8-6319-47b6-9dcb-e1d09c53cc1f" (UID: "7fecccc8-6319-47b6-9dcb-e1d09c53cc1f"). InnerVolumeSpecName "kube-api-access-zgs45". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.545993 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnrnh\" (UniqueName: \"kubernetes.io/projected/301f16bd-223a-43a2-89cb-1bff1beac16e-kube-api-access-tnrnh\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.546037 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgs45\" (UniqueName: \"kubernetes.io/projected/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f-kube-api-access-zgs45\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.546048 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/301f16bd-223a-43a2-89cb-1bff1beac16e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.546089 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.848979 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jlcbm" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.848972 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jlcbm" event={"ID":"7fecccc8-6319-47b6-9dcb-e1d09c53cc1f","Type":"ContainerDied","Data":"513cbc16b99527fdd889760734066a0cb9a72b68dd0fab84617972c32e6a9c70"} Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.849113 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="513cbc16b99527fdd889760734066a0cb9a72b68dd0fab84617972c32e6a9c70" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.850940 4806 generic.go:334] "Generic (PLEG): container finished" podID="c73d5149-0bc4-4db1-83fe-a3ae8745d0de" containerID="344e67022fbd133bb9dec4fc7a0fe008a30abc1aa17d81e24539680056095056" exitCode=0 Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.851048 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pfzk8" event={"ID":"c73d5149-0bc4-4db1-83fe-a3ae8745d0de","Type":"ContainerDied","Data":"344e67022fbd133bb9dec4fc7a0fe008a30abc1aa17d81e24539680056095056"} Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.851811 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2dckc" event={"ID":"061b909a-a88f-4261-9ccf-2daaf3958621","Type":"ContainerStarted","Data":"e90f6db0cc932c03752a37141ff563db329226ff18f85822d5ef2688073f6fb4"} Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.853454 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-65f6-account-create-update-l6nk9" event={"ID":"301f16bd-223a-43a2-89cb-1bff1beac16e","Type":"ContainerDied","Data":"c573503fe3397572bfb1b7ea6591be8ebb09fde3e01aa04ddef755b957279636"} Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.853473 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c573503fe3397572bfb1b7ea6591be8ebb09fde3e01aa04ddef755b957279636" Jan 26 08:09:58 crc kubenswrapper[4806]: I0126 08:09:58.853534 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-65f6-account-create-update-l6nk9" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.054773 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c44b1d5-8329-42c3-a12a-b83aba0cf701" path="/var/lib/kubelet/pods/5c44b1d5-8329-42c3-a12a-b83aba0cf701/volumes" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.112407 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-x6n6z"] Jan 26 08:09:59 crc kubenswrapper[4806]: E0126 08:09:59.112905 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301f16bd-223a-43a2-89cb-1bff1beac16e" containerName="mariadb-account-create-update" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.112922 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="301f16bd-223a-43a2-89cb-1bff1beac16e" containerName="mariadb-account-create-update" Jan 26 08:09:59 crc kubenswrapper[4806]: E0126 08:09:59.112963 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fecccc8-6319-47b6-9dcb-e1d09c53cc1f" containerName="mariadb-database-create" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.112975 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fecccc8-6319-47b6-9dcb-e1d09c53cc1f" containerName="mariadb-database-create" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.113170 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fecccc8-6319-47b6-9dcb-e1d09c53cc1f" containerName="mariadb-database-create" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.113192 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="301f16bd-223a-43a2-89cb-1bff1beac16e" containerName="mariadb-account-create-update" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.113804 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-x6n6z" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.130866 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-x6n6z"] Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.156634 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw4dz\" (UniqueName: \"kubernetes.io/projected/43b0a02c-c897-4cb4-bc1e-f478cec82e6a-kube-api-access-lw4dz\") pod \"keystone-db-create-x6n6z\" (UID: \"43b0a02c-c897-4cb4-bc1e-f478cec82e6a\") " pod="openstack/keystone-db-create-x6n6z" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.156736 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b0a02c-c897-4cb4-bc1e-f478cec82e6a-operator-scripts\") pod \"keystone-db-create-x6n6z\" (UID: \"43b0a02c-c897-4cb4-bc1e-f478cec82e6a\") " pod="openstack/keystone-db-create-x6n6z" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.258124 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw4dz\" (UniqueName: \"kubernetes.io/projected/43b0a02c-c897-4cb4-bc1e-f478cec82e6a-kube-api-access-lw4dz\") pod \"keystone-db-create-x6n6z\" (UID: \"43b0a02c-c897-4cb4-bc1e-f478cec82e6a\") " pod="openstack/keystone-db-create-x6n6z" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.258228 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b0a02c-c897-4cb4-bc1e-f478cec82e6a-operator-scripts\") pod \"keystone-db-create-x6n6z\" (UID: \"43b0a02c-c897-4cb4-bc1e-f478cec82e6a\") " pod="openstack/keystone-db-create-x6n6z" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.259639 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b0a02c-c897-4cb4-bc1e-f478cec82e6a-operator-scripts\") pod \"keystone-db-create-x6n6z\" (UID: \"43b0a02c-c897-4cb4-bc1e-f478cec82e6a\") " pod="openstack/keystone-db-create-x6n6z" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.274257 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw4dz\" (UniqueName: \"kubernetes.io/projected/43b0a02c-c897-4cb4-bc1e-f478cec82e6a-kube-api-access-lw4dz\") pod \"keystone-db-create-x6n6z\" (UID: \"43b0a02c-c897-4cb4-bc1e-f478cec82e6a\") " pod="openstack/keystone-db-create-x6n6z" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.295189 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8eec-account-create-update-5h69f"] Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.296134 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8eec-account-create-update-5h69f" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.298606 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.318569 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8eec-account-create-update-5h69f"] Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.359683 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317adfae-9113-4a76-964b-063e9c840848-operator-scripts\") pod \"keystone-8eec-account-create-update-5h69f\" (UID: \"317adfae-9113-4a76-964b-063e9c840848\") " pod="openstack/keystone-8eec-account-create-update-5h69f" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.359759 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hltv4\" (UniqueName: \"kubernetes.io/projected/317adfae-9113-4a76-964b-063e9c840848-kube-api-access-hltv4\") pod \"keystone-8eec-account-create-update-5h69f\" (UID: \"317adfae-9113-4a76-964b-063e9c840848\") " pod="openstack/keystone-8eec-account-create-update-5h69f" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.435387 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-x6n6z" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.460939 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317adfae-9113-4a76-964b-063e9c840848-operator-scripts\") pod \"keystone-8eec-account-create-update-5h69f\" (UID: \"317adfae-9113-4a76-964b-063e9c840848\") " pod="openstack/keystone-8eec-account-create-update-5h69f" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.461018 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hltv4\" (UniqueName: \"kubernetes.io/projected/317adfae-9113-4a76-964b-063e9c840848-kube-api-access-hltv4\") pod \"keystone-8eec-account-create-update-5h69f\" (UID: \"317adfae-9113-4a76-964b-063e9c840848\") " pod="openstack/keystone-8eec-account-create-update-5h69f" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.462200 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317adfae-9113-4a76-964b-063e9c840848-operator-scripts\") pod \"keystone-8eec-account-create-update-5h69f\" (UID: \"317adfae-9113-4a76-964b-063e9c840848\") " pod="openstack/keystone-8eec-account-create-update-5h69f" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.479050 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hltv4\" (UniqueName: \"kubernetes.io/projected/317adfae-9113-4a76-964b-063e9c840848-kube-api-access-hltv4\") pod \"keystone-8eec-account-create-update-5h69f\" (UID: \"317adfae-9113-4a76-964b-063e9c840848\") " pod="openstack/keystone-8eec-account-create-update-5h69f" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.547712 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-v7b2d"] Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.548675 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-v7b2d" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.557744 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v7b2d"] Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.562398 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37971779-caab-4f56-a749-e545819352ce-operator-scripts\") pod \"placement-db-create-v7b2d\" (UID: \"37971779-caab-4f56-a749-e545819352ce\") " pod="openstack/placement-db-create-v7b2d" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.562548 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs7np\" (UniqueName: \"kubernetes.io/projected/37971779-caab-4f56-a749-e545819352ce-kube-api-access-rs7np\") pod \"placement-db-create-v7b2d\" (UID: \"37971779-caab-4f56-a749-e545819352ce\") " pod="openstack/placement-db-create-v7b2d" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.624046 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8eec-account-create-update-5h69f" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.661189 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4502-account-create-update-kd7nt"] Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.662258 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4502-account-create-update-kd7nt" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.666996 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37971779-caab-4f56-a749-e545819352ce-operator-scripts\") pod \"placement-db-create-v7b2d\" (UID: \"37971779-caab-4f56-a749-e545819352ce\") " pod="openstack/placement-db-create-v7b2d" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.667059 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs7np\" (UniqueName: \"kubernetes.io/projected/37971779-caab-4f56-a749-e545819352ce-kube-api-access-rs7np\") pod \"placement-db-create-v7b2d\" (UID: \"37971779-caab-4f56-a749-e545819352ce\") " pod="openstack/placement-db-create-v7b2d" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.674227 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37971779-caab-4f56-a749-e545819352ce-operator-scripts\") pod \"placement-db-create-v7b2d\" (UID: \"37971779-caab-4f56-a749-e545819352ce\") " pod="openstack/placement-db-create-v7b2d" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.675266 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4502-account-create-update-kd7nt"] Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.675755 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.711098 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs7np\" (UniqueName: \"kubernetes.io/projected/37971779-caab-4f56-a749-e545819352ce-kube-api-access-rs7np\") pod \"placement-db-create-v7b2d\" (UID: \"37971779-caab-4f56-a749-e545819352ce\") " pod="openstack/placement-db-create-v7b2d" Jan 26 08:09:59 crc kubenswrapper[4806]: 
I0126 08:09:59.769165 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035ee86e-30e8-4e6c-9e99-6e0abca4fa67-operator-scripts\") pod \"placement-4502-account-create-update-kd7nt\" (UID: \"035ee86e-30e8-4e6c-9e99-6e0abca4fa67\") " pod="openstack/placement-4502-account-create-update-kd7nt" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.769328 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjtfl\" (UniqueName: \"kubernetes.io/projected/035ee86e-30e8-4e6c-9e99-6e0abca4fa67-kube-api-access-jjtfl\") pod \"placement-4502-account-create-update-kd7nt\" (UID: \"035ee86e-30e8-4e6c-9e99-6e0abca4fa67\") " pod="openstack/placement-4502-account-create-update-kd7nt" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.871232 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjtfl\" (UniqueName: \"kubernetes.io/projected/035ee86e-30e8-4e6c-9e99-6e0abca4fa67-kube-api-access-jjtfl\") pod \"placement-4502-account-create-update-kd7nt\" (UID: \"035ee86e-30e8-4e6c-9e99-6e0abca4fa67\") " pod="openstack/placement-4502-account-create-update-kd7nt" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.871287 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035ee86e-30e8-4e6c-9e99-6e0abca4fa67-operator-scripts\") pod \"placement-4502-account-create-update-kd7nt\" (UID: \"035ee86e-30e8-4e6c-9e99-6e0abca4fa67\") " pod="openstack/placement-4502-account-create-update-kd7nt" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.874010 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035ee86e-30e8-4e6c-9e99-6e0abca4fa67-operator-scripts\") pod \"placement-4502-account-create-update-kd7nt\" (UID: \"035ee86e-30e8-4e6c-9e99-6e0abca4fa67\") " pod="openstack/placement-4502-account-create-update-kd7nt" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.897203 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjtfl\" (UniqueName: \"kubernetes.io/projected/035ee86e-30e8-4e6c-9e99-6e0abca4fa67-kube-api-access-jjtfl\") pod \"placement-4502-account-create-update-kd7nt\" (UID: \"035ee86e-30e8-4e6c-9e99-6e0abca4fa67\") " pod="openstack/placement-4502-account-create-update-kd7nt" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.910791 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v7b2d" Jan 26 08:09:59 crc kubenswrapper[4806]: I0126 08:09:59.991531 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4502-account-create-update-kd7nt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.152956 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-hfpkt"] Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.154174 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.156698 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.156822 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d2qmz" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.171353 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hfpkt"] Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.184345 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwqv6\" (UniqueName: \"kubernetes.io/projected/f602c552-c375-4d9b-96fc-633ad5811f7d-kube-api-access-vwqv6\") pod \"glance-db-sync-hfpkt\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.186996 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-combined-ca-bundle\") pod \"glance-db-sync-hfpkt\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.187204 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-config-data\") pod \"glance-db-sync-hfpkt\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.187235 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-db-sync-config-data\") pod \"glance-db-sync-hfpkt\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.288758 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-config-data\") pod \"glance-db-sync-hfpkt\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.288811 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-db-sync-config-data\") pod \"glance-db-sync-hfpkt\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.288880 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwqv6\" (UniqueName: \"kubernetes.io/projected/f602c552-c375-4d9b-96fc-633ad5811f7d-kube-api-access-vwqv6\") pod \"glance-db-sync-hfpkt\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.288922 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-combined-ca-bundle\") pod 
\"glance-db-sync-hfpkt\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.294154 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-db-sync-config-data\") pod \"glance-db-sync-hfpkt\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.294913 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-config-data\") pod \"glance-db-sync-hfpkt\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.295607 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-combined-ca-bundle\") pod \"glance-db-sync-hfpkt\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.305644 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwqv6\" (UniqueName: \"kubernetes.io/projected/f602c552-c375-4d9b-96fc-633ad5811f7d-kube-api-access-vwqv6\") pod \"glance-db-sync-hfpkt\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.469563 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:00 crc kubenswrapper[4806]: I0126 08:10:00.696214 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:10:00 crc kubenswrapper[4806]: E0126 08:10:00.696415 4806 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 08:10:00 crc kubenswrapper[4806]: E0126 08:10:00.696502 4806 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 08:10:00 crc kubenswrapper[4806]: E0126 08:10:00.696571 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift podName:fcc22009-cca0-438b-8f2f-5c245db7c70c nodeName:}" failed. No retries permitted until 2026-01-26 08:10:08.696553715 +0000 UTC m=+987.960961771 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift") pod "swift-storage-0" (UID: "fcc22009-cca0-438b-8f2f-5c245db7c70c") : configmap "swift-ring-files" not found Jan 26 08:10:01 crc kubenswrapper[4806]: I0126 08:10:01.665820 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 08:10:01 crc kubenswrapper[4806]: I0126 08:10:01.961151 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.031042 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-kbjs2"] Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.031255 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" podUID="be395281-fc30-4d88-8d0c-e1528c53d8cb" containerName="dnsmasq-dns" containerID="cri-o://836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d" gracePeriod=10 Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.704334 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pfzk8" Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.835812 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzwps\" (UniqueName: \"kubernetes.io/projected/c73d5149-0bc4-4db1-83fe-a3ae8745d0de-kube-api-access-jzwps\") pod \"c73d5149-0bc4-4db1-83fe-a3ae8745d0de\" (UID: \"c73d5149-0bc4-4db1-83fe-a3ae8745d0de\") " Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.836364 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73d5149-0bc4-4db1-83fe-a3ae8745d0de-operator-scripts\") pod \"c73d5149-0bc4-4db1-83fe-a3ae8745d0de\" (UID: \"c73d5149-0bc4-4db1-83fe-a3ae8745d0de\") " Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.837217 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c73d5149-0bc4-4db1-83fe-a3ae8745d0de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c73d5149-0bc4-4db1-83fe-a3ae8745d0de" (UID: "c73d5149-0bc4-4db1-83fe-a3ae8745d0de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.837633 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c73d5149-0bc4-4db1-83fe-a3ae8745d0de-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.840899 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c73d5149-0bc4-4db1-83fe-a3ae8745d0de-kube-api-access-jzwps" (OuterVolumeSpecName: "kube-api-access-jzwps") pod "c73d5149-0bc4-4db1-83fe-a3ae8745d0de" (UID: "c73d5149-0bc4-4db1-83fe-a3ae8745d0de"). InnerVolumeSpecName "kube-api-access-jzwps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.894746 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.941644 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-config\") pod \"be395281-fc30-4d88-8d0c-e1528c53d8cb\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.941765 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6pww\" (UniqueName: \"kubernetes.io/projected/be395281-fc30-4d88-8d0c-e1528c53d8cb-kube-api-access-m6pww\") pod \"be395281-fc30-4d88-8d0c-e1528c53d8cb\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.941782 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-ovsdbserver-nb\") pod \"be395281-fc30-4d88-8d0c-e1528c53d8cb\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.941892 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-dns-svc\") pod \"be395281-fc30-4d88-8d0c-e1528c53d8cb\" (UID: \"be395281-fc30-4d88-8d0c-e1528c53d8cb\") " Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.942269 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzwps\" (UniqueName: \"kubernetes.io/projected/c73d5149-0bc4-4db1-83fe-a3ae8745d0de-kube-api-access-jzwps\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.958590 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be395281-fc30-4d88-8d0c-e1528c53d8cb-kube-api-access-m6pww" (OuterVolumeSpecName: "kube-api-access-m6pww") pod "be395281-fc30-4d88-8d0c-e1528c53d8cb" (UID: "be395281-fc30-4d88-8d0c-e1528c53d8cb"). InnerVolumeSpecName "kube-api-access-m6pww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.958842 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pfzk8" event={"ID":"c73d5149-0bc4-4db1-83fe-a3ae8745d0de","Type":"ContainerDied","Data":"05d04600b0391a3d3824f52f0a02ead8f771d8960efb62a6364c2afd7cb38030"} Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.958872 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05d04600b0391a3d3824f52f0a02ead8f771d8960efb62a6364c2afd7cb38030" Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.958927 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pfzk8" Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.976472 4806 generic.go:334] "Generic (PLEG): container finished" podID="be395281-fc30-4d88-8d0c-e1528c53d8cb" containerID="836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d" exitCode=0 Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.976582 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" event={"ID":"be395281-fc30-4d88-8d0c-e1528c53d8cb","Type":"ContainerDied","Data":"836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d"} Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.976635 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" event={"ID":"be395281-fc30-4d88-8d0c-e1528c53d8cb","Type":"ContainerDied","Data":"f8f189b29e5b67f2fe22e7ea1a74fe86ce9882cdb266de2bbeef69a8a2c6daa2"} Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.976655 4806 scope.go:117] "RemoveContainer" containerID="836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d" Jan 26 08:10:02 crc kubenswrapper[4806]: I0126 08:10:02.977733 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-kbjs2" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.019760 4806 scope.go:117] "RemoveContainer" containerID="5f1616aa5ffbd2b59b472d6fa5afcea9ec9bc96c4fb28901d02563f6aa4d81fa" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.045781 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6pww\" (UniqueName: \"kubernetes.io/projected/be395281-fc30-4d88-8d0c-e1528c53d8cb-kube-api-access-m6pww\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.046679 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-config" (OuterVolumeSpecName: "config") pod "be395281-fc30-4d88-8d0c-e1528c53d8cb" (UID: "be395281-fc30-4d88-8d0c-e1528c53d8cb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.054031 4806 scope.go:117] "RemoveContainer" containerID="836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d" Jan 26 08:10:03 crc kubenswrapper[4806]: E0126 08:10:03.064333 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d\": container with ID starting with 836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d not found: ID does not exist" containerID="836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.064380 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d"} err="failed to get container status \"836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d\": rpc error: code = NotFound desc = could not find container \"836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d\": container with ID starting with 836361e735d8428b2cb5002c0ac3c78462a362ca92f777cabbc7dfa2775ceb3d not found: ID does not exist" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.064407 4806 scope.go:117] "RemoveContainer" containerID="5f1616aa5ffbd2b59b472d6fa5afcea9ec9bc96c4fb28901d02563f6aa4d81fa" Jan 26 08:10:03 crc kubenswrapper[4806]: E0126 08:10:03.064776 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f1616aa5ffbd2b59b472d6fa5afcea9ec9bc96c4fb28901d02563f6aa4d81fa\": container with ID starting with 5f1616aa5ffbd2b59b472d6fa5afcea9ec9bc96c4fb28901d02563f6aa4d81fa not found: ID does not exist" containerID="5f1616aa5ffbd2b59b472d6fa5afcea9ec9bc96c4fb28901d02563f6aa4d81fa" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.064850 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f1616aa5ffbd2b59b472d6fa5afcea9ec9bc96c4fb28901d02563f6aa4d81fa"} err="failed to get container status \"5f1616aa5ffbd2b59b472d6fa5afcea9ec9bc96c4fb28901d02563f6aa4d81fa\": rpc error: code = NotFound desc = could not find container \"5f1616aa5ffbd2b59b472d6fa5afcea9ec9bc96c4fb28901d02563f6aa4d81fa\": container with ID starting with 5f1616aa5ffbd2b59b472d6fa5afcea9ec9bc96c4fb28901d02563f6aa4d81fa not found: ID does not exist" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.068892 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be395281-fc30-4d88-8d0c-e1528c53d8cb" (UID: "be395281-fc30-4d88-8d0c-e1528c53d8cb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.091453 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be395281-fc30-4d88-8d0c-e1528c53d8cb" (UID: "be395281-fc30-4d88-8d0c-e1528c53d8cb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.147033 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.147061 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.147071 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be395281-fc30-4d88-8d0c-e1528c53d8cb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.289758 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-x6n6z"] Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.297947 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-v7b2d"] Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.324613 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-kbjs2"] Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.330061 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-kbjs2"] Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.517576 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-hfpkt"] Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.563235 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4502-account-create-update-kd7nt"] Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.587673 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8eec-account-create-update-5h69f"] Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.982989 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8eec-account-create-update-5h69f" event={"ID":"317adfae-9113-4a76-964b-063e9c840848","Type":"ContainerStarted","Data":"b9691dda52df678cc5aed2b824c1d97ae291997747c91e54aabf969847ef3a99"} Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.985298 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2dckc" event={"ID":"061b909a-a88f-4261-9ccf-2daaf3958621","Type":"ContainerStarted","Data":"47e4f4a462ececac3d3bdcbf69f78262b5a869d374f506d5a3bc0d0804830d3f"} Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.988305 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4502-account-create-update-kd7nt" event={"ID":"035ee86e-30e8-4e6c-9e99-6e0abca4fa67","Type":"ContainerStarted","Data":"907e206e7d1b1b2da686c4378556c94068073ada938878ed7245f28ca8dc6a50"} Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.988397 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4502-account-create-update-kd7nt" event={"ID":"035ee86e-30e8-4e6c-9e99-6e0abca4fa67","Type":"ContainerStarted","Data":"5e379385549a58936b5c09a4b48e410fbdc5e8d9eef27d78464109003a95540c"} Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.990795 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x6n6z" 
event={"ID":"43b0a02c-c897-4cb4-bc1e-f478cec82e6a","Type":"ContainerStarted","Data":"7a6f6c0bca9dca4ecbf086f470e8ef2e80044c7ff32490763c6c78221d131739"} Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.990832 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x6n6z" event={"ID":"43b0a02c-c897-4cb4-bc1e-f478cec82e6a","Type":"ContainerStarted","Data":"e7090995af1dfb9776e51ac405eeb3bd47ec65442e93a12a5c59c5bfc645a28e"} Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.993360 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v7b2d" event={"ID":"37971779-caab-4f56-a749-e545819352ce","Type":"ContainerStarted","Data":"b8c9a49db432ddc8cd606bc0771f04ae2157303dc1d575109f83630db4d47dd2"} Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.993399 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v7b2d" event={"ID":"37971779-caab-4f56-a749-e545819352ce","Type":"ContainerStarted","Data":"3b0d8c2d9033e7df2dd02924a6155b9106f61caeb1643a04a7341166d6eed333"} Jan 26 08:10:03 crc kubenswrapper[4806]: I0126 08:10:03.994763 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hfpkt" event={"ID":"f602c552-c375-4d9b-96fc-633ad5811f7d","Type":"ContainerStarted","Data":"97217450f8475c26f843acd8407d6496d356cad593eab5b8bb4bace2a3e4fbfe"} Jan 26 08:10:04 crc kubenswrapper[4806]: I0126 08:10:04.016871 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-2dckc" podStartSLOduration=3.189794541 podStartE2EDuration="8.016858491s" podCreationTimestamp="2026-01-26 08:09:56 +0000 UTC" firstStartedPulling="2026-01-26 08:09:57.838242843 +0000 UTC m=+977.102650899" lastFinishedPulling="2026-01-26 08:10:02.665306793 +0000 UTC m=+981.929714849" observedRunningTime="2026-01-26 08:10:04.01610661 +0000 UTC m=+983.280514666" watchObservedRunningTime="2026-01-26 08:10:04.016858491 +0000 UTC m=+983.281266547" Jan 26 08:10:04 crc kubenswrapper[4806]: I0126 08:10:04.037233 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-4502-account-create-update-kd7nt" podStartSLOduration=5.037217739 podStartE2EDuration="5.037217739s" podCreationTimestamp="2026-01-26 08:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:10:04.035162323 +0000 UTC m=+983.299570379" watchObservedRunningTime="2026-01-26 08:10:04.037217739 +0000 UTC m=+983.301625795" Jan 26 08:10:04 crc kubenswrapper[4806]: I0126 08:10:04.049958 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-x6n6z" podStartSLOduration=5.049943499 podStartE2EDuration="5.049943499s" podCreationTimestamp="2026-01-26 08:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:10:04.047744918 +0000 UTC m=+983.312152974" watchObservedRunningTime="2026-01-26 08:10:04.049943499 +0000 UTC m=+983.314351555" Jan 26 08:10:04 crc kubenswrapper[4806]: I0126 08:10:04.072492 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-v7b2d" podStartSLOduration=5.072474557 podStartE2EDuration="5.072474557s" podCreationTimestamp="2026-01-26 08:09:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-26 08:10:04.066003849 +0000 UTC m=+983.330411905" watchObservedRunningTime="2026-01-26 08:10:04.072474557 +0000 UTC m=+983.336882613" Jan 26 08:10:04 crc kubenswrapper[4806]: E0126 08:10:04.561756 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod317adfae_9113_4a76_964b_063e9c840848.slice/crio-conmon-371966ff7583c52a4523d9a493ecf17e3934207445d577ff97a9b3c9d9d84fe7.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod317adfae_9113_4a76_964b_063e9c840848.slice/crio-371966ff7583c52a4523d9a493ecf17e3934207445d577ff97a9b3c9d9d84fe7.scope\": RecentStats: unable to find data in memory cache]" Jan 26 08:10:05 crc kubenswrapper[4806]: I0126 08:10:05.004448 4806 generic.go:334] "Generic (PLEG): container finished" podID="37971779-caab-4f56-a749-e545819352ce" containerID="b8c9a49db432ddc8cd606bc0771f04ae2157303dc1d575109f83630db4d47dd2" exitCode=0 Jan 26 08:10:05 crc kubenswrapper[4806]: I0126 08:10:05.004541 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v7b2d" event={"ID":"37971779-caab-4f56-a749-e545819352ce","Type":"ContainerDied","Data":"b8c9a49db432ddc8cd606bc0771f04ae2157303dc1d575109f83630db4d47dd2"} Jan 26 08:10:05 crc kubenswrapper[4806]: I0126 08:10:05.006827 4806 generic.go:334] "Generic (PLEG): container finished" podID="317adfae-9113-4a76-964b-063e9c840848" containerID="371966ff7583c52a4523d9a493ecf17e3934207445d577ff97a9b3c9d9d84fe7" exitCode=0 Jan 26 08:10:05 crc kubenswrapper[4806]: I0126 08:10:05.006892 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8eec-account-create-update-5h69f" event={"ID":"317adfae-9113-4a76-964b-063e9c840848","Type":"ContainerDied","Data":"371966ff7583c52a4523d9a493ecf17e3934207445d577ff97a9b3c9d9d84fe7"} Jan 26 08:10:05 crc kubenswrapper[4806]: I0126 08:10:05.008434 4806 generic.go:334] "Generic (PLEG): container finished" podID="035ee86e-30e8-4e6c-9e99-6e0abca4fa67" containerID="907e206e7d1b1b2da686c4378556c94068073ada938878ed7245f28ca8dc6a50" exitCode=0 Jan 26 08:10:05 crc kubenswrapper[4806]: I0126 08:10:05.008476 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4502-account-create-update-kd7nt" event={"ID":"035ee86e-30e8-4e6c-9e99-6e0abca4fa67","Type":"ContainerDied","Data":"907e206e7d1b1b2da686c4378556c94068073ada938878ed7245f28ca8dc6a50"} Jan 26 08:10:05 crc kubenswrapper[4806]: I0126 08:10:05.009900 4806 generic.go:334] "Generic (PLEG): container finished" podID="43b0a02c-c897-4cb4-bc1e-f478cec82e6a" containerID="7a6f6c0bca9dca4ecbf086f470e8ef2e80044c7ff32490763c6c78221d131739" exitCode=0 Jan 26 08:10:05 crc kubenswrapper[4806]: I0126 08:10:05.010840 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x6n6z" event={"ID":"43b0a02c-c897-4cb4-bc1e-f478cec82e6a","Type":"ContainerDied","Data":"7a6f6c0bca9dca4ecbf086f470e8ef2e80044c7ff32490763c6c78221d131739"} Jan 26 08:10:05 crc kubenswrapper[4806]: I0126 08:10:05.080993 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be395281-fc30-4d88-8d0c-e1528c53d8cb" path="/var/lib/kubelet/pods/be395281-fc30-4d88-8d0c-e1528c53d8cb/volumes" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.528775 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4502-account-create-update-kd7nt" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.709323 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035ee86e-30e8-4e6c-9e99-6e0abca4fa67-operator-scripts\") pod \"035ee86e-30e8-4e6c-9e99-6e0abca4fa67\" (UID: \"035ee86e-30e8-4e6c-9e99-6e0abca4fa67\") " Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.709462 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjtfl\" (UniqueName: \"kubernetes.io/projected/035ee86e-30e8-4e6c-9e99-6e0abca4fa67-kube-api-access-jjtfl\") pod \"035ee86e-30e8-4e6c-9e99-6e0abca4fa67\" (UID: \"035ee86e-30e8-4e6c-9e99-6e0abca4fa67\") " Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.710209 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/035ee86e-30e8-4e6c-9e99-6e0abca4fa67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "035ee86e-30e8-4e6c-9e99-6e0abca4fa67" (UID: "035ee86e-30e8-4e6c-9e99-6e0abca4fa67"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.715468 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035ee86e-30e8-4e6c-9e99-6e0abca4fa67-kube-api-access-jjtfl" (OuterVolumeSpecName: "kube-api-access-jjtfl") pod "035ee86e-30e8-4e6c-9e99-6e0abca4fa67" (UID: "035ee86e-30e8-4e6c-9e99-6e0abca4fa67"). InnerVolumeSpecName "kube-api-access-jjtfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.746879 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v7b2d" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.756684 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-x6n6z" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.777102 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8eec-account-create-update-5h69f" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.811152 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjtfl\" (UniqueName: \"kubernetes.io/projected/035ee86e-30e8-4e6c-9e99-6e0abca4fa67-kube-api-access-jjtfl\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.811182 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/035ee86e-30e8-4e6c-9e99-6e0abca4fa67-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.912146 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hltv4\" (UniqueName: \"kubernetes.io/projected/317adfae-9113-4a76-964b-063e9c840848-kube-api-access-hltv4\") pod \"317adfae-9113-4a76-964b-063e9c840848\" (UID: \"317adfae-9113-4a76-964b-063e9c840848\") " Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.913250 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317adfae-9113-4a76-964b-063e9c840848-operator-scripts\") pod \"317adfae-9113-4a76-964b-063e9c840848\" (UID: \"317adfae-9113-4a76-964b-063e9c840848\") " Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.913755 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/317adfae-9113-4a76-964b-063e9c840848-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "317adfae-9113-4a76-964b-063e9c840848" (UID: "317adfae-9113-4a76-964b-063e9c840848"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.913830 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37971779-caab-4f56-a749-e545819352ce-operator-scripts\") pod \"37971779-caab-4f56-a749-e545819352ce\" (UID: \"37971779-caab-4f56-a749-e545819352ce\") " Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.914233 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37971779-caab-4f56-a749-e545819352ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "37971779-caab-4f56-a749-e545819352ce" (UID: "37971779-caab-4f56-a749-e545819352ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.914289 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b0a02c-c897-4cb4-bc1e-f478cec82e6a-operator-scripts\") pod \"43b0a02c-c897-4cb4-bc1e-f478cec82e6a\" (UID: \"43b0a02c-c897-4cb4-bc1e-f478cec82e6a\") " Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.915077 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43b0a02c-c897-4cb4-bc1e-f478cec82e6a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43b0a02c-c897-4cb4-bc1e-f478cec82e6a" (UID: "43b0a02c-c897-4cb4-bc1e-f478cec82e6a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.915230 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/317adfae-9113-4a76-964b-063e9c840848-kube-api-access-hltv4" (OuterVolumeSpecName: "kube-api-access-hltv4") pod "317adfae-9113-4a76-964b-063e9c840848" (UID: "317adfae-9113-4a76-964b-063e9c840848"). InnerVolumeSpecName "kube-api-access-hltv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.915332 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs7np\" (UniqueName: \"kubernetes.io/projected/37971779-caab-4f56-a749-e545819352ce-kube-api-access-rs7np\") pod \"37971779-caab-4f56-a749-e545819352ce\" (UID: \"37971779-caab-4f56-a749-e545819352ce\") " Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.915737 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lw4dz\" (UniqueName: \"kubernetes.io/projected/43b0a02c-c897-4cb4-bc1e-f478cec82e6a-kube-api-access-lw4dz\") pod \"43b0a02c-c897-4cb4-bc1e-f478cec82e6a\" (UID: \"43b0a02c-c897-4cb4-bc1e-f478cec82e6a\") " Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.916366 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hltv4\" (UniqueName: \"kubernetes.io/projected/317adfae-9113-4a76-964b-063e9c840848-kube-api-access-hltv4\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.916382 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/317adfae-9113-4a76-964b-063e9c840848-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.916397 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/37971779-caab-4f56-a749-e545819352ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.916405 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43b0a02c-c897-4cb4-bc1e-f478cec82e6a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.917261 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37971779-caab-4f56-a749-e545819352ce-kube-api-access-rs7np" (OuterVolumeSpecName: "kube-api-access-rs7np") pod "37971779-caab-4f56-a749-e545819352ce" (UID: "37971779-caab-4f56-a749-e545819352ce"). InnerVolumeSpecName "kube-api-access-rs7np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:06 crc kubenswrapper[4806]: I0126 08:10:06.918134 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43b0a02c-c897-4cb4-bc1e-f478cec82e6a-kube-api-access-lw4dz" (OuterVolumeSpecName: "kube-api-access-lw4dz") pod "43b0a02c-c897-4cb4-bc1e-f478cec82e6a" (UID: "43b0a02c-c897-4cb4-bc1e-f478cec82e6a"). InnerVolumeSpecName "kube-api-access-lw4dz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.018673 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs7np\" (UniqueName: \"kubernetes.io/projected/37971779-caab-4f56-a749-e545819352ce-kube-api-access-rs7np\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.018702 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lw4dz\" (UniqueName: \"kubernetes.io/projected/43b0a02c-c897-4cb4-bc1e-f478cec82e6a-kube-api-access-lw4dz\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.033650 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-v7b2d" event={"ID":"37971779-caab-4f56-a749-e545819352ce","Type":"ContainerDied","Data":"3b0d8c2d9033e7df2dd02924a6155b9106f61caeb1643a04a7341166d6eed333"} Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.033709 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b0d8c2d9033e7df2dd02924a6155b9106f61caeb1643a04a7341166d6eed333" Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.033779 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-v7b2d" Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.036654 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8eec-account-create-update-5h69f" event={"ID":"317adfae-9113-4a76-964b-063e9c840848","Type":"ContainerDied","Data":"b9691dda52df678cc5aed2b824c1d97ae291997747c91e54aabf969847ef3a99"} Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.036700 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9691dda52df678cc5aed2b824c1d97ae291997747c91e54aabf969847ef3a99" Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.036769 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8eec-account-create-update-5h69f" Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.037937 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4502-account-create-update-kd7nt" event={"ID":"035ee86e-30e8-4e6c-9e99-6e0abca4fa67","Type":"ContainerDied","Data":"5e379385549a58936b5c09a4b48e410fbdc5e8d9eef27d78464109003a95540c"} Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.037954 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e379385549a58936b5c09a4b48e410fbdc5e8d9eef27d78464109003a95540c" Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.037992 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4502-account-create-update-kd7nt" Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.044338 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-x6n6z" Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.064904 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-x6n6z" event={"ID":"43b0a02c-c897-4cb4-bc1e-f478cec82e6a","Type":"ContainerDied","Data":"e7090995af1dfb9776e51ac405eeb3bd47ec65442e93a12a5c59c5bfc645a28e"} Jan 26 08:10:07 crc kubenswrapper[4806]: I0126 08:10:07.064942 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7090995af1dfb9776e51ac405eeb3bd47ec65442e93a12a5c59c5bfc645a28e" Jan 26 08:10:08 crc kubenswrapper[4806]: I0126 08:10:08.054966 4806 generic.go:334] "Generic (PLEG): container finished" podID="025ae3ca-3082-4bc8-8611-5b23cec63932" containerID="064bb866426cda4e680a27ba4d576877d2096d5ec28f01113c3d45c614189c37" exitCode=0 Jan 26 08:10:08 crc kubenswrapper[4806]: I0126 08:10:08.055065 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"025ae3ca-3082-4bc8-8611-5b23cec63932","Type":"ContainerDied","Data":"064bb866426cda4e680a27ba4d576877d2096d5ec28f01113c3d45c614189c37"} Jan 26 08:10:08 crc kubenswrapper[4806]: I0126 08:10:08.077080 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pfzk8"] Jan 26 08:10:08 crc kubenswrapper[4806]: I0126 08:10:08.087878 4806 generic.go:334] "Generic (PLEG): container finished" podID="8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" containerID="2218f5533d96af9fe346f68622866cb68caba66cdeb205c50f295727b54e7752" exitCode=0 Jan 26 08:10:08 crc kubenswrapper[4806]: I0126 08:10:08.087910 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35","Type":"ContainerDied","Data":"2218f5533d96af9fe346f68622866cb68caba66cdeb205c50f295727b54e7752"} Jan 26 08:10:08 crc kubenswrapper[4806]: I0126 08:10:08.099810 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pfzk8"] Jan 26 08:10:08 crc kubenswrapper[4806]: I0126 08:10:08.439174 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 08:10:08 crc kubenswrapper[4806]: I0126 08:10:08.771480 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:10:08 crc kubenswrapper[4806]: E0126 08:10:08.771753 4806 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 26 08:10:08 crc kubenswrapper[4806]: E0126 08:10:08.771770 4806 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 26 08:10:08 crc kubenswrapper[4806]: E0126 08:10:08.771824 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift podName:fcc22009-cca0-438b-8f2f-5c245db7c70c nodeName:}" failed. No retries permitted until 2026-01-26 08:10:24.771805612 +0000 UTC m=+1004.036213668 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift") pod "swift-storage-0" (UID: "fcc22009-cca0-438b-8f2f-5c245db7c70c") : configmap "swift-ring-files" not found Jan 26 08:10:09 crc kubenswrapper[4806]: I0126 08:10:09.049925 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c73d5149-0bc4-4db1-83fe-a3ae8745d0de" path="/var/lib/kubelet/pods/c73d5149-0bc4-4db1-83fe-a3ae8745d0de/volumes" Jan 26 08:10:09 crc kubenswrapper[4806]: I0126 08:10:09.095982 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35","Type":"ContainerStarted","Data":"1727c328532a2dfb6251d6e5e4df741df38623a7ae21c46b0fa9b876282b4d7f"} Jan 26 08:10:09 crc kubenswrapper[4806]: I0126 08:10:09.096917 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 08:10:09 crc kubenswrapper[4806]: I0126 08:10:09.099964 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"025ae3ca-3082-4bc8-8611-5b23cec63932","Type":"ContainerStarted","Data":"f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc"} Jan 26 08:10:09 crc kubenswrapper[4806]: I0126 08:10:09.100213 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:10:09 crc kubenswrapper[4806]: I0126 08:10:09.181590 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.560068946 podStartE2EDuration="1m14.181571513s" podCreationTimestamp="2026-01-26 08:08:55 +0000 UTC" firstStartedPulling="2026-01-26 08:08:58.748880432 +0000 UTC m=+918.013288488" lastFinishedPulling="2026-01-26 08:09:33.370382999 +0000 UTC m=+952.634791055" observedRunningTime="2026-01-26 08:10:09.13806523 +0000 UTC m=+988.402473286" watchObservedRunningTime="2026-01-26 08:10:09.181571513 +0000 UTC m=+988.445979569" Jan 26 08:10:09 crc kubenswrapper[4806]: I0126 08:10:09.184860 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.327493695 podStartE2EDuration="1m14.184846023s" podCreationTimestamp="2026-01-26 08:08:55 +0000 UTC" firstStartedPulling="2026-01-26 08:08:57.844318637 +0000 UTC m=+917.108726693" lastFinishedPulling="2026-01-26 08:09:32.701670965 +0000 UTC m=+951.966079021" observedRunningTime="2026-01-26 08:10:09.17672161 +0000 UTC m=+988.441129666" watchObservedRunningTime="2026-01-26 08:10:09.184846023 +0000 UTC m=+988.449254079" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.649921 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jb2zj" podUID="b5d47098-d6d7-4b59-a88c-4bfb7d643a89" containerName="ovn-controller" probeResult="failure" output=< Jan 26 08:10:11 crc kubenswrapper[4806]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 08:10:11 crc kubenswrapper[4806]: > Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.668609 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.713908 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-r7hjs" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933125 4806 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jb2zj-config-4f9jh"] Jan 26 08:10:11 crc kubenswrapper[4806]: E0126 08:10:11.933471 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035ee86e-30e8-4e6c-9e99-6e0abca4fa67" containerName="mariadb-account-create-update" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933494 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="035ee86e-30e8-4e6c-9e99-6e0abca4fa67" containerName="mariadb-account-create-update" Jan 26 08:10:11 crc kubenswrapper[4806]: E0126 08:10:11.933513 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be395281-fc30-4d88-8d0c-e1528c53d8cb" containerName="init" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933537 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="be395281-fc30-4d88-8d0c-e1528c53d8cb" containerName="init" Jan 26 08:10:11 crc kubenswrapper[4806]: E0126 08:10:11.933552 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="317adfae-9113-4a76-964b-063e9c840848" containerName="mariadb-account-create-update" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933557 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="317adfae-9113-4a76-964b-063e9c840848" containerName="mariadb-account-create-update" Jan 26 08:10:11 crc kubenswrapper[4806]: E0126 08:10:11.933573 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37971779-caab-4f56-a749-e545819352ce" containerName="mariadb-database-create" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933579 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="37971779-caab-4f56-a749-e545819352ce" containerName="mariadb-database-create" Jan 26 08:10:11 crc kubenswrapper[4806]: E0126 08:10:11.933596 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c73d5149-0bc4-4db1-83fe-a3ae8745d0de" containerName="mariadb-account-create-update" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933601 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c73d5149-0bc4-4db1-83fe-a3ae8745d0de" containerName="mariadb-account-create-update" Jan 26 08:10:11 crc kubenswrapper[4806]: E0126 08:10:11.933611 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43b0a02c-c897-4cb4-bc1e-f478cec82e6a" containerName="mariadb-database-create" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933617 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="43b0a02c-c897-4cb4-bc1e-f478cec82e6a" containerName="mariadb-database-create" Jan 26 08:10:11 crc kubenswrapper[4806]: E0126 08:10:11.933628 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be395281-fc30-4d88-8d0c-e1528c53d8cb" containerName="dnsmasq-dns" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933634 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="be395281-fc30-4d88-8d0c-e1528c53d8cb" containerName="dnsmasq-dns" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933798 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="035ee86e-30e8-4e6c-9e99-6e0abca4fa67" containerName="mariadb-account-create-update" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933811 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c73d5149-0bc4-4db1-83fe-a3ae8745d0de" containerName="mariadb-account-create-update" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933823 4806 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="317adfae-9113-4a76-964b-063e9c840848" containerName="mariadb-account-create-update" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933837 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="37971779-caab-4f56-a749-e545819352ce" containerName="mariadb-database-create" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933849 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="be395281-fc30-4d88-8d0c-e1528c53d8cb" containerName="dnsmasq-dns" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.933858 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="43b0a02c-c897-4cb4-bc1e-f478cec82e6a" containerName="mariadb-database-create" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.934398 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.937446 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 08:10:11 crc kubenswrapper[4806]: I0126 08:10:11.950420 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jb2zj-config-4f9jh"] Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.047164 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j9ct\" (UniqueName: \"kubernetes.io/projected/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-kube-api-access-4j9ct\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.047235 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-run\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.047257 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-log-ovn\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.047284 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-additional-scripts\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.047483 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-scripts\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.047667 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-run-ovn\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.132876 4806 generic.go:334] "Generic (PLEG): container finished" podID="061b909a-a88f-4261-9ccf-2daaf3958621" containerID="47e4f4a462ececac3d3bdcbf69f78262b5a869d374f506d5a3bc0d0804830d3f" exitCode=0 Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.133158 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2dckc" event={"ID":"061b909a-a88f-4261-9ccf-2daaf3958621","Type":"ContainerDied","Data":"47e4f4a462ececac3d3bdcbf69f78262b5a869d374f506d5a3bc0d0804830d3f"} Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.149590 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-additional-scripts\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.149690 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-scripts\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.149738 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-run-ovn\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.149801 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j9ct\" (UniqueName: \"kubernetes.io/projected/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-kube-api-access-4j9ct\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.149828 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-run\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.149846 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-log-ovn\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.151122 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-additional-scripts\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " 
pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.151887 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-run\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.151928 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-log-ovn\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.152052 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-run-ovn\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.152774 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-scripts\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.198340 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j9ct\" (UniqueName: \"kubernetes.io/projected/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-kube-api-access-4j9ct\") pod \"ovn-controller-jb2zj-config-4f9jh\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:12 crc kubenswrapper[4806]: I0126 08:10:12.253757 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:13 crc kubenswrapper[4806]: I0126 08:10:13.089088 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4xvcz"] Jan 26 08:10:13 crc kubenswrapper[4806]: I0126 08:10:13.090037 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4xvcz" Jan 26 08:10:13 crc kubenswrapper[4806]: I0126 08:10:13.091985 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 08:10:13 crc kubenswrapper[4806]: I0126 08:10:13.127559 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4xvcz"] Jan 26 08:10:13 crc kubenswrapper[4806]: I0126 08:10:13.172490 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn9r4\" (UniqueName: \"kubernetes.io/projected/5e77d0cb-b2a5-443f-a47a-7ab76309eee5-kube-api-access-mn9r4\") pod \"root-account-create-update-4xvcz\" (UID: \"5e77d0cb-b2a5-443f-a47a-7ab76309eee5\") " pod="openstack/root-account-create-update-4xvcz" Jan 26 08:10:13 crc kubenswrapper[4806]: I0126 08:10:13.172555 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e77d0cb-b2a5-443f-a47a-7ab76309eee5-operator-scripts\") pod \"root-account-create-update-4xvcz\" (UID: \"5e77d0cb-b2a5-443f-a47a-7ab76309eee5\") " pod="openstack/root-account-create-update-4xvcz" Jan 26 08:10:13 crc kubenswrapper[4806]: I0126 08:10:13.275657 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn9r4\" (UniqueName: \"kubernetes.io/projected/5e77d0cb-b2a5-443f-a47a-7ab76309eee5-kube-api-access-mn9r4\") pod \"root-account-create-update-4xvcz\" (UID: \"5e77d0cb-b2a5-443f-a47a-7ab76309eee5\") " pod="openstack/root-account-create-update-4xvcz" Jan 26 08:10:13 crc kubenswrapper[4806]: I0126 08:10:13.275723 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e77d0cb-b2a5-443f-a47a-7ab76309eee5-operator-scripts\") pod \"root-account-create-update-4xvcz\" (UID: \"5e77d0cb-b2a5-443f-a47a-7ab76309eee5\") " pod="openstack/root-account-create-update-4xvcz" Jan 26 08:10:13 crc kubenswrapper[4806]: I0126 08:10:13.283045 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e77d0cb-b2a5-443f-a47a-7ab76309eee5-operator-scripts\") pod \"root-account-create-update-4xvcz\" (UID: \"5e77d0cb-b2a5-443f-a47a-7ab76309eee5\") " pod="openstack/root-account-create-update-4xvcz" Jan 26 08:10:13 crc kubenswrapper[4806]: I0126 08:10:13.333067 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn9r4\" (UniqueName: \"kubernetes.io/projected/5e77d0cb-b2a5-443f-a47a-7ab76309eee5-kube-api-access-mn9r4\") pod \"root-account-create-update-4xvcz\" (UID: \"5e77d0cb-b2a5-443f-a47a-7ab76309eee5\") " pod="openstack/root-account-create-update-4xvcz" Jan 26 08:10:13 crc kubenswrapper[4806]: I0126 08:10:13.407089 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4xvcz" Jan 26 08:10:15 crc kubenswrapper[4806]: I0126 08:10:15.806462 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:10:15 crc kubenswrapper[4806]: I0126 08:10:15.806749 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:10:16 crc kubenswrapper[4806]: I0126 08:10:16.612408 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jb2zj" podUID="b5d47098-d6d7-4b59-a88c-4bfb7d643a89" containerName="ovn-controller" probeResult="failure" output=< Jan 26 08:10:16 crc kubenswrapper[4806]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 08:10:16 crc kubenswrapper[4806]: > Jan 26 08:10:18 crc kubenswrapper[4806]: I0126 08:10:18.025022 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 26 08:10:20 crc kubenswrapper[4806]: E0126 08:10:20.948160 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 26 08:10:20 crc kubenswrapper[4806]: E0126 08:10:20.948886 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vwqv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-hfpkt_openstack(f602c552-c375-4d9b-96fc-633ad5811f7d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:10:20 crc kubenswrapper[4806]: E0126 08:10:20.951183 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-hfpkt" podUID="f602c552-c375-4d9b-96fc-633ad5811f7d" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.057980 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.118476 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/061b909a-a88f-4261-9ccf-2daaf3958621-ring-data-devices\") pod \"061b909a-a88f-4261-9ccf-2daaf3958621\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.118543 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-combined-ca-bundle\") pod \"061b909a-a88f-4261-9ccf-2daaf3958621\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.118576 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-dispersionconf\") pod \"061b909a-a88f-4261-9ccf-2daaf3958621\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.118614 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-swiftconf\") pod \"061b909a-a88f-4261-9ccf-2daaf3958621\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.118654 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/061b909a-a88f-4261-9ccf-2daaf3958621-etc-swift\") pod \"061b909a-a88f-4261-9ccf-2daaf3958621\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.118721 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4x95\" (UniqueName: \"kubernetes.io/projected/061b909a-a88f-4261-9ccf-2daaf3958621-kube-api-access-r4x95\") pod \"061b909a-a88f-4261-9ccf-2daaf3958621\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.118768 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/061b909a-a88f-4261-9ccf-2daaf3958621-scripts\") pod \"061b909a-a88f-4261-9ccf-2daaf3958621\" (UID: \"061b909a-a88f-4261-9ccf-2daaf3958621\") " Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.120295 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061b909a-a88f-4261-9ccf-2daaf3958621-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "061b909a-a88f-4261-9ccf-2daaf3958621" (UID: "061b909a-a88f-4261-9ccf-2daaf3958621"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.120351 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/061b909a-a88f-4261-9ccf-2daaf3958621-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "061b909a-a88f-4261-9ccf-2daaf3958621" (UID: "061b909a-a88f-4261-9ccf-2daaf3958621"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.124736 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/061b909a-a88f-4261-9ccf-2daaf3958621-kube-api-access-r4x95" (OuterVolumeSpecName: "kube-api-access-r4x95") pod "061b909a-a88f-4261-9ccf-2daaf3958621" (UID: "061b909a-a88f-4261-9ccf-2daaf3958621"). InnerVolumeSpecName "kube-api-access-r4x95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.127641 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "061b909a-a88f-4261-9ccf-2daaf3958621" (UID: "061b909a-a88f-4261-9ccf-2daaf3958621"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.156107 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "061b909a-a88f-4261-9ccf-2daaf3958621" (UID: "061b909a-a88f-4261-9ccf-2daaf3958621"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.156218 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "061b909a-a88f-4261-9ccf-2daaf3958621" (UID: "061b909a-a88f-4261-9ccf-2daaf3958621"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.180405 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/061b909a-a88f-4261-9ccf-2daaf3958621-scripts" (OuterVolumeSpecName: "scripts") pod "061b909a-a88f-4261-9ccf-2daaf3958621" (UID: "061b909a-a88f-4261-9ccf-2daaf3958621"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.215511 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-2dckc" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.216278 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-2dckc" event={"ID":"061b909a-a88f-4261-9ccf-2daaf3958621","Type":"ContainerDied","Data":"e90f6db0cc932c03752a37141ff563db329226ff18f85822d5ef2688073f6fb4"} Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.216331 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e90f6db0cc932c03752a37141ff563db329226ff18f85822d5ef2688073f6fb4" Jan 26 08:10:21 crc kubenswrapper[4806]: E0126 08:10:21.216721 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-hfpkt" podUID="f602c552-c375-4d9b-96fc-633ad5811f7d" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.220716 4806 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.220735 4806 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/061b909a-a88f-4261-9ccf-2daaf3958621-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.220745 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4x95\" (UniqueName: \"kubernetes.io/projected/061b909a-a88f-4261-9ccf-2daaf3958621-kube-api-access-r4x95\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.220755 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/061b909a-a88f-4261-9ccf-2daaf3958621-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.220765 4806 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/061b909a-a88f-4261-9ccf-2daaf3958621-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.220773 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.220782 4806 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/061b909a-a88f-4261-9ccf-2daaf3958621-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:21 crc kubenswrapper[4806]: W0126 08:10:21.434514 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07286747_f6f9_4ac7_85ed_5b882c2ac2fc.slice/crio-2bbc2e02cc43415ee487b21db3947ef1ee342e607e18ccc1fb1198d4bd386ea0 WatchSource:0}: Error finding container 2bbc2e02cc43415ee487b21db3947ef1ee342e607e18ccc1fb1198d4bd386ea0: Status 404 returned error can't find the container with id 2bbc2e02cc43415ee487b21db3947ef1ee342e607e18ccc1fb1198d4bd386ea0 Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.437005 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-jb2zj-config-4f9jh"] Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.498238 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4xvcz"] Jan 26 08:10:21 crc kubenswrapper[4806]: I0126 08:10:21.612877 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jb2zj" podUID="b5d47098-d6d7-4b59-a88c-4bfb7d643a89" containerName="ovn-controller" probeResult="failure" output=< Jan 26 08:10:21 crc kubenswrapper[4806]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 08:10:21 crc kubenswrapper[4806]: > Jan 26 08:10:22 crc kubenswrapper[4806]: I0126 08:10:22.222617 4806 generic.go:334] "Generic (PLEG): container finished" podID="07286747-f6f9-4ac7-85ed-5b882c2ac2fc" containerID="c1a8985ec4cf190a37f23e672d8a8e5ff1509ea1220c951f22b374a93149782d" exitCode=0 Jan 26 08:10:22 crc kubenswrapper[4806]: I0126 08:10:22.222659 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jb2zj-config-4f9jh" event={"ID":"07286747-f6f9-4ac7-85ed-5b882c2ac2fc","Type":"ContainerDied","Data":"c1a8985ec4cf190a37f23e672d8a8e5ff1509ea1220c951f22b374a93149782d"} Jan 26 08:10:22 crc kubenswrapper[4806]: I0126 08:10:22.222954 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jb2zj-config-4f9jh" event={"ID":"07286747-f6f9-4ac7-85ed-5b882c2ac2fc","Type":"ContainerStarted","Data":"2bbc2e02cc43415ee487b21db3947ef1ee342e607e18ccc1fb1198d4bd386ea0"} Jan 26 08:10:22 crc kubenswrapper[4806]: I0126 08:10:22.225514 4806 generic.go:334] "Generic (PLEG): container finished" podID="5e77d0cb-b2a5-443f-a47a-7ab76309eee5" containerID="c677256371bd406932dc9892939917e181d2d520d385383e77cf3beeff2cd9c4" exitCode=0 Jan 26 08:10:22 crc kubenswrapper[4806]: I0126 08:10:22.225560 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4xvcz" event={"ID":"5e77d0cb-b2a5-443f-a47a-7ab76309eee5","Type":"ContainerDied","Data":"c677256371bd406932dc9892939917e181d2d520d385383e77cf3beeff2cd9c4"} Jan 26 08:10:22 crc kubenswrapper[4806]: I0126 08:10:22.225577 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4xvcz" event={"ID":"5e77d0cb-b2a5-443f-a47a-7ab76309eee5","Type":"ContainerStarted","Data":"3d29917de885864635bc957837ec5dbacbc69097089247f6b9eb8891f7a7c987"} Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.611147 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4xvcz" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.618600 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.664303 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-additional-scripts\") pod \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.664389 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-run-ovn\") pod \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.664418 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j9ct\" (UniqueName: \"kubernetes.io/projected/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-kube-api-access-4j9ct\") pod \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.664445 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-log-ovn\") pod \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.664485 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn9r4\" (UniqueName: \"kubernetes.io/projected/5e77d0cb-b2a5-443f-a47a-7ab76309eee5-kube-api-access-mn9r4\") pod \"5e77d0cb-b2a5-443f-a47a-7ab76309eee5\" (UID: \"5e77d0cb-b2a5-443f-a47a-7ab76309eee5\") " Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.664502 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "07286747-f6f9-4ac7-85ed-5b882c2ac2fc" (UID: "07286747-f6f9-4ac7-85ed-5b882c2ac2fc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.664551 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-scripts\") pod \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.664582 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e77d0cb-b2a5-443f-a47a-7ab76309eee5-operator-scripts\") pod \"5e77d0cb-b2a5-443f-a47a-7ab76309eee5\" (UID: \"5e77d0cb-b2a5-443f-a47a-7ab76309eee5\") " Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.664550 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "07286747-f6f9-4ac7-85ed-5b882c2ac2fc" (UID: "07286747-f6f9-4ac7-85ed-5b882c2ac2fc"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.664641 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-run\") pod \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\" (UID: \"07286747-f6f9-4ac7-85ed-5b882c2ac2fc\") " Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.664988 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-run" (OuterVolumeSpecName: "var-run") pod "07286747-f6f9-4ac7-85ed-5b882c2ac2fc" (UID: "07286747-f6f9-4ac7-85ed-5b882c2ac2fc"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.665214 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e77d0cb-b2a5-443f-a47a-7ab76309eee5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e77d0cb-b2a5-443f-a47a-7ab76309eee5" (UID: "5e77d0cb-b2a5-443f-a47a-7ab76309eee5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.665343 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "07286747-f6f9-4ac7-85ed-5b882c2ac2fc" (UID: "07286747-f6f9-4ac7-85ed-5b882c2ac2fc"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.665597 4806 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.665614 4806 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.665624 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e77d0cb-b2a5-443f-a47a-7ab76309eee5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.665634 4806 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.665642 4806 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.665637 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-scripts" (OuterVolumeSpecName: "scripts") pod "07286747-f6f9-4ac7-85ed-5b882c2ac2fc" (UID: "07286747-f6f9-4ac7-85ed-5b882c2ac2fc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.669934 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e77d0cb-b2a5-443f-a47a-7ab76309eee5-kube-api-access-mn9r4" (OuterVolumeSpecName: "kube-api-access-mn9r4") pod "5e77d0cb-b2a5-443f-a47a-7ab76309eee5" (UID: "5e77d0cb-b2a5-443f-a47a-7ab76309eee5"). InnerVolumeSpecName "kube-api-access-mn9r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.691446 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-kube-api-access-4j9ct" (OuterVolumeSpecName: "kube-api-access-4j9ct") pod "07286747-f6f9-4ac7-85ed-5b882c2ac2fc" (UID: "07286747-f6f9-4ac7-85ed-5b882c2ac2fc"). InnerVolumeSpecName "kube-api-access-4j9ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.767207 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn9r4\" (UniqueName: \"kubernetes.io/projected/5e77d0cb-b2a5-443f-a47a-7ab76309eee5-kube-api-access-mn9r4\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.767243 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:23 crc kubenswrapper[4806]: I0126 08:10:23.767254 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j9ct\" (UniqueName: \"kubernetes.io/projected/07286747-f6f9-4ac7-85ed-5b882c2ac2fc-kube-api-access-4j9ct\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:24 crc kubenswrapper[4806]: I0126 08:10:24.243814 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jb2zj-config-4f9jh" event={"ID":"07286747-f6f9-4ac7-85ed-5b882c2ac2fc","Type":"ContainerDied","Data":"2bbc2e02cc43415ee487b21db3947ef1ee342e607e18ccc1fb1198d4bd386ea0"} Jan 26 08:10:24 crc kubenswrapper[4806]: I0126 08:10:24.243844 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jb2zj-config-4f9jh" Jan 26 08:10:24 crc kubenswrapper[4806]: I0126 08:10:24.243854 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bbc2e02cc43415ee487b21db3947ef1ee342e607e18ccc1fb1198d4bd386ea0" Jan 26 08:10:24 crc kubenswrapper[4806]: I0126 08:10:24.245781 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4xvcz" event={"ID":"5e77d0cb-b2a5-443f-a47a-7ab76309eee5","Type":"ContainerDied","Data":"3d29917de885864635bc957837ec5dbacbc69097089247f6b9eb8891f7a7c987"} Jan 26 08:10:24 crc kubenswrapper[4806]: I0126 08:10:24.245812 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d29917de885864635bc957837ec5dbacbc69097089247f6b9eb8891f7a7c987" Jan 26 08:10:24 crc kubenswrapper[4806]: I0126 08:10:24.245830 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4xvcz" Jan 26 08:10:24 crc kubenswrapper[4806]: I0126 08:10:24.777204 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jb2zj-config-4f9jh"] Jan 26 08:10:24 crc kubenswrapper[4806]: I0126 08:10:24.784213 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:10:24 crc kubenswrapper[4806]: I0126 08:10:24.786054 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jb2zj-config-4f9jh"] Jan 26 08:10:24 crc kubenswrapper[4806]: I0126 08:10:24.805509 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc22009-cca0-438b-8f2f-5c245db7c70c-etc-swift\") pod \"swift-storage-0\" (UID: \"fcc22009-cca0-438b-8f2f-5c245db7c70c\") " pod="openstack/swift-storage-0" Jan 26 08:10:24 crc kubenswrapper[4806]: I0126 08:10:24.848665 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 26 08:10:25 crc kubenswrapper[4806]: I0126 08:10:25.051051 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07286747-f6f9-4ac7-85ed-5b882c2ac2fc" path="/var/lib/kubelet/pods/07286747-f6f9-4ac7-85ed-5b882c2ac2fc/volumes" Jan 26 08:10:25 crc kubenswrapper[4806]: I0126 08:10:25.365028 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 26 08:10:26 crc kubenswrapper[4806]: I0126 08:10:26.263120 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"fa47fa6f238f0f5552ee63fc3390720767f515d6ffb67552b90fbe59eeda921f"} Jan 26 08:10:26 crc kubenswrapper[4806]: I0126 08:10:26.599439 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-jb2zj" Jan 26 08:10:27 crc kubenswrapper[4806]: I0126 08:10:27.161715 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:10:27 crc kubenswrapper[4806]: I0126 08:10:27.274640 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"7beb5f2bc6548e32191cb9f189c7692529dd82c26f9febfdd7627d5ec4617aa0"} Jan 26 08:10:27 crc kubenswrapper[4806]: I0126 08:10:27.274973 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"167d8765be1168ab526933678c62b1b54f00aff8e56740983aac66afe3869072"} Jan 26 08:10:27 crc kubenswrapper[4806]: I0126 08:10:27.274984 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"b339cbeee65f2b247ce9ac4a5574aecf302b07ed6f14fd460f19dc056886d465"} Jan 26 08:10:28 crc kubenswrapper[4806]: I0126 08:10:28.026723 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 08:10:28 crc kubenswrapper[4806]: I0126 08:10:28.284137 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"624346aedac4b6baa4eafeb8c28f157acb8da87d4f30929dac3f03732922891e"} Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.209480 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-77pc8"] Jan 26 08:10:29 crc kubenswrapper[4806]: E0126 08:10:29.210077 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07286747-f6f9-4ac7-85ed-5b882c2ac2fc" containerName="ovn-config" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.210094 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="07286747-f6f9-4ac7-85ed-5b882c2ac2fc" containerName="ovn-config" Jan 26 08:10:29 crc kubenswrapper[4806]: E0126 08:10:29.210116 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="061b909a-a88f-4261-9ccf-2daaf3958621" containerName="swift-ring-rebalance" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.210122 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="061b909a-a88f-4261-9ccf-2daaf3958621" containerName="swift-ring-rebalance" Jan 26 08:10:29 crc kubenswrapper[4806]: E0126 08:10:29.210142 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e77d0cb-b2a5-443f-a47a-7ab76309eee5" containerName="mariadb-account-create-update" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.210149 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e77d0cb-b2a5-443f-a47a-7ab76309eee5" containerName="mariadb-account-create-update" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.210311 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e77d0cb-b2a5-443f-a47a-7ab76309eee5" containerName="mariadb-account-create-update" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.210324 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="07286747-f6f9-4ac7-85ed-5b882c2ac2fc" containerName="ovn-config" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.210335 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="061b909a-a88f-4261-9ccf-2daaf3958621" containerName="swift-ring-rebalance" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.210792 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-77pc8" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.257906 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec02cc0-9e30-460d-938a-b04b357649d3-operator-scripts\") pod \"cinder-db-create-77pc8\" (UID: \"6ec02cc0-9e30-460d-938a-b04b357649d3\") " pod="openstack/cinder-db-create-77pc8" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.257961 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qspq\" (UniqueName: \"kubernetes.io/projected/6ec02cc0-9e30-460d-938a-b04b357649d3-kube-api-access-8qspq\") pod \"cinder-db-create-77pc8\" (UID: \"6ec02cc0-9e30-460d-938a-b04b357649d3\") " pod="openstack/cinder-db-create-77pc8" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.329531 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"e9b545ca484f5b9c11848ddb0a2c8e826c15798f52f18a17431be4c03d2e09b4"} Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.336686 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-mj4kn"] Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.338194 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mj4kn" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.349574 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7f99-account-create-update-7hh6g"] Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.350652 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7f99-account-create-update-7hh6g" Jan 26 08:10:29 crc kubenswrapper[4806]: W0126 08:10:29.352177 4806 reflector.go:561] object-"openstack"/"cinder-db-secret": failed to list *v1.Secret: secrets "cinder-db-secret" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Jan 26 08:10:29 crc kubenswrapper[4806]: E0126 08:10:29.352205 4806 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-db-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cinder-db-secret\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.359705 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zg99\" (UniqueName: \"kubernetes.io/projected/979ab357-98b5-4ee2-87d8-678702adfab2-kube-api-access-8zg99\") pod \"barbican-db-create-mj4kn\" (UID: \"979ab357-98b5-4ee2-87d8-678702adfab2\") " pod="openstack/barbican-db-create-mj4kn" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.359813 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979ab357-98b5-4ee2-87d8-678702adfab2-operator-scripts\") pod \"barbican-db-create-mj4kn\" (UID: \"979ab357-98b5-4ee2-87d8-678702adfab2\") " pod="openstack/barbican-db-create-mj4kn" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.359859 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bffa786-3a1c-4303-b303-8500b3614ab8-operator-scripts\") pod \"cinder-7f99-account-create-update-7hh6g\" (UID: \"0bffa786-3a1c-4303-b303-8500b3614ab8\") " pod="openstack/cinder-7f99-account-create-update-7hh6g" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.359887 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec02cc0-9e30-460d-938a-b04b357649d3-operator-scripts\") pod \"cinder-db-create-77pc8\" (UID: \"6ec02cc0-9e30-460d-938a-b04b357649d3\") " pod="openstack/cinder-db-create-77pc8" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.359924 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qspq\" (UniqueName: \"kubernetes.io/projected/6ec02cc0-9e30-460d-938a-b04b357649d3-kube-api-access-8qspq\") pod \"cinder-db-create-77pc8\" (UID: \"6ec02cc0-9e30-460d-938a-b04b357649d3\") " pod="openstack/cinder-db-create-77pc8" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.359944 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwngb\" (UniqueName: \"kubernetes.io/projected/0bffa786-3a1c-4303-b303-8500b3614ab8-kube-api-access-wwngb\") pod \"cinder-7f99-account-create-update-7hh6g\" (UID: \"0bffa786-3a1c-4303-b303-8500b3614ab8\") " pod="openstack/cinder-7f99-account-create-update-7hh6g" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.360828 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/6ec02cc0-9e30-460d-938a-b04b357649d3-operator-scripts\") pod \"cinder-db-create-77pc8\" (UID: \"6ec02cc0-9e30-460d-938a-b04b357649d3\") " pod="openstack/cinder-db-create-77pc8" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.377358 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-77pc8"] Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.405271 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mj4kn"] Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.420581 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7f99-account-create-update-7hh6g"] Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.429638 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qspq\" (UniqueName: \"kubernetes.io/projected/6ec02cc0-9e30-460d-938a-b04b357649d3-kube-api-access-8qspq\") pod \"cinder-db-create-77pc8\" (UID: \"6ec02cc0-9e30-460d-938a-b04b357649d3\") " pod="openstack/cinder-db-create-77pc8" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.460990 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979ab357-98b5-4ee2-87d8-678702adfab2-operator-scripts\") pod \"barbican-db-create-mj4kn\" (UID: \"979ab357-98b5-4ee2-87d8-678702adfab2\") " pod="openstack/barbican-db-create-mj4kn" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.461044 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bffa786-3a1c-4303-b303-8500b3614ab8-operator-scripts\") pod \"cinder-7f99-account-create-update-7hh6g\" (UID: \"0bffa786-3a1c-4303-b303-8500b3614ab8\") " pod="openstack/cinder-7f99-account-create-update-7hh6g" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.461075 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwngb\" (UniqueName: \"kubernetes.io/projected/0bffa786-3a1c-4303-b303-8500b3614ab8-kube-api-access-wwngb\") pod \"cinder-7f99-account-create-update-7hh6g\" (UID: \"0bffa786-3a1c-4303-b303-8500b3614ab8\") " pod="openstack/cinder-7f99-account-create-update-7hh6g" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.461133 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zg99\" (UniqueName: \"kubernetes.io/projected/979ab357-98b5-4ee2-87d8-678702adfab2-kube-api-access-8zg99\") pod \"barbican-db-create-mj4kn\" (UID: \"979ab357-98b5-4ee2-87d8-678702adfab2\") " pod="openstack/barbican-db-create-mj4kn" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.463322 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bffa786-3a1c-4303-b303-8500b3614ab8-operator-scripts\") pod \"cinder-7f99-account-create-update-7hh6g\" (UID: \"0bffa786-3a1c-4303-b303-8500b3614ab8\") " pod="openstack/cinder-7f99-account-create-update-7hh6g" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.474022 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979ab357-98b5-4ee2-87d8-678702adfab2-operator-scripts\") pod \"barbican-db-create-mj4kn\" (UID: \"979ab357-98b5-4ee2-87d8-678702adfab2\") " pod="openstack/barbican-db-create-mj4kn" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.485640 4806 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-qc6mc"] Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.486747 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-qc6mc" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.510626 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zg99\" (UniqueName: \"kubernetes.io/projected/979ab357-98b5-4ee2-87d8-678702adfab2-kube-api-access-8zg99\") pod \"barbican-db-create-mj4kn\" (UID: \"979ab357-98b5-4ee2-87d8-678702adfab2\") " pod="openstack/barbican-db-create-mj4kn" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.513280 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwngb\" (UniqueName: \"kubernetes.io/projected/0bffa786-3a1c-4303-b303-8500b3614ab8-kube-api-access-wwngb\") pod \"cinder-7f99-account-create-update-7hh6g\" (UID: \"0bffa786-3a1c-4303-b303-8500b3614ab8\") " pod="openstack/cinder-7f99-account-create-update-7hh6g" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.514419 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-qc6mc"] Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.525960 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-77pc8" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.667154 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgl79\" (UniqueName: \"kubernetes.io/projected/78af78c0-adca-4ff1-960f-5d8f918e2a1a-kube-api-access-hgl79\") pod \"heat-db-create-qc6mc\" (UID: \"78af78c0-adca-4ff1-960f-5d8f918e2a1a\") " pod="openstack/heat-db-create-qc6mc" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.667399 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78af78c0-adca-4ff1-960f-5d8f918e2a1a-operator-scripts\") pod \"heat-db-create-qc6mc\" (UID: \"78af78c0-adca-4ff1-960f-5d8f918e2a1a\") " pod="openstack/heat-db-create-qc6mc" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.673320 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mj4kn" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.683726 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7f99-account-create-update-7hh6g" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.749120 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-afd2-account-create-update-dnjwx"] Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.750199 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-afd2-account-create-update-dnjwx" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.756362 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.772731 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc-operator-scripts\") pod \"barbican-afd2-account-create-update-dnjwx\" (UID: \"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc\") " pod="openstack/barbican-afd2-account-create-update-dnjwx" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.772812 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78af78c0-adca-4ff1-960f-5d8f918e2a1a-operator-scripts\") pod \"heat-db-create-qc6mc\" (UID: \"78af78c0-adca-4ff1-960f-5d8f918e2a1a\") " pod="openstack/heat-db-create-qc6mc" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.772848 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgl79\" (UniqueName: \"kubernetes.io/projected/78af78c0-adca-4ff1-960f-5d8f918e2a1a-kube-api-access-hgl79\") pod \"heat-db-create-qc6mc\" (UID: \"78af78c0-adca-4ff1-960f-5d8f918e2a1a\") " pod="openstack/heat-db-create-qc6mc" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.772924 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2tjt\" (UniqueName: \"kubernetes.io/projected/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc-kube-api-access-l2tjt\") pod \"barbican-afd2-account-create-update-dnjwx\" (UID: \"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc\") " pod="openstack/barbican-afd2-account-create-update-dnjwx" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.773563 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78af78c0-adca-4ff1-960f-5d8f918e2a1a-operator-scripts\") pod \"heat-db-create-qc6mc\" (UID: \"78af78c0-adca-4ff1-960f-5d8f918e2a1a\") " pod="openstack/heat-db-create-qc6mc" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.781833 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-afd2-account-create-update-dnjwx"] Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.813142 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgl79\" (UniqueName: \"kubernetes.io/projected/78af78c0-adca-4ff1-960f-5d8f918e2a1a-kube-api-access-hgl79\") pod \"heat-db-create-qc6mc\" (UID: \"78af78c0-adca-4ff1-960f-5d8f918e2a1a\") " pod="openstack/heat-db-create-qc6mc" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.864947 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-qc6mc" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.872374 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-pbrx8"] Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.873376 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.875235 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2tjt\" (UniqueName: \"kubernetes.io/projected/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc-kube-api-access-l2tjt\") pod \"barbican-afd2-account-create-update-dnjwx\" (UID: \"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc\") " pod="openstack/barbican-afd2-account-create-update-dnjwx" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.875261 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc-operator-scripts\") pod \"barbican-afd2-account-create-update-dnjwx\" (UID: \"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc\") " pod="openstack/barbican-afd2-account-create-update-dnjwx" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.876128 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc-operator-scripts\") pod \"barbican-afd2-account-create-update-dnjwx\" (UID: \"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc\") " pod="openstack/barbican-afd2-account-create-update-dnjwx" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.877156 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.877586 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.877805 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.877923 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wsvrh" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.905790 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pbrx8"] Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.948087 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2tjt\" (UniqueName: \"kubernetes.io/projected/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc-kube-api-access-l2tjt\") pod \"barbican-afd2-account-create-update-dnjwx\" (UID: \"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc\") " pod="openstack/barbican-afd2-account-create-update-dnjwx" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.981235 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba70c1a-7213-421b-b154-ac57621252b8-combined-ca-bundle\") pod \"keystone-db-sync-pbrx8\" (UID: \"0ba70c1a-7213-421b-b154-ac57621252b8\") " pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.981530 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba70c1a-7213-421b-b154-ac57621252b8-config-data\") pod \"keystone-db-sync-pbrx8\" (UID: \"0ba70c1a-7213-421b-b154-ac57621252b8\") " pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.981563 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mffh\" (UniqueName: 
\"kubernetes.io/projected/0ba70c1a-7213-421b-b154-ac57621252b8-kube-api-access-5mffh\") pod \"keystone-db-sync-pbrx8\" (UID: \"0ba70c1a-7213-421b-b154-ac57621252b8\") " pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.984136 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-1468-account-create-update-kfzbt"] Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.985775 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1468-account-create-update-kfzbt" Jan 26 08:10:29 crc kubenswrapper[4806]: I0126 08:10:29.995783 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.017912 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-1468-account-create-update-kfzbt"] Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.082950 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba70c1a-7213-421b-b154-ac57621252b8-combined-ca-bundle\") pod \"keystone-db-sync-pbrx8\" (UID: \"0ba70c1a-7213-421b-b154-ac57621252b8\") " pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.083002 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba70c1a-7213-421b-b154-ac57621252b8-config-data\") pod \"keystone-db-sync-pbrx8\" (UID: \"0ba70c1a-7213-421b-b154-ac57621252b8\") " pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.083028 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mffh\" (UniqueName: \"kubernetes.io/projected/0ba70c1a-7213-421b-b154-ac57621252b8-kube-api-access-5mffh\") pod \"keystone-db-sync-pbrx8\" (UID: \"0ba70c1a-7213-421b-b154-ac57621252b8\") " pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.087578 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba70c1a-7213-421b-b154-ac57621252b8-config-data\") pod \"keystone-db-sync-pbrx8\" (UID: \"0ba70c1a-7213-421b-b154-ac57621252b8\") " pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.091320 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba70c1a-7213-421b-b154-ac57621252b8-combined-ca-bundle\") pod \"keystone-db-sync-pbrx8\" (UID: \"0ba70c1a-7213-421b-b154-ac57621252b8\") " pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.120450 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-9cfwh"] Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.121695 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9cfwh" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.122212 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mffh\" (UniqueName: \"kubernetes.io/projected/0ba70c1a-7213-421b-b154-ac57621252b8-kube-api-access-5mffh\") pod \"keystone-db-sync-pbrx8\" (UID: \"0ba70c1a-7213-421b-b154-ac57621252b8\") " pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.146597 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-b311-account-create-update-mv65h"] Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.147654 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b311-account-create-update-mv65h" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.149813 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.161391 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9cfwh"] Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.170306 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b311-account-create-update-mv65h"] Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.186181 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khd9f\" (UniqueName: \"kubernetes.io/projected/ece2b5de-984b-4a8c-8115-e84363f5f599-kube-api-access-khd9f\") pod \"neutron-db-create-9cfwh\" (UID: \"ece2b5de-984b-4a8c-8115-e84363f5f599\") " pod="openstack/neutron-db-create-9cfwh" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.186276 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb76c3b5-a137-4408-9cc4-7e17505b7989-operator-scripts\") pod \"neutron-b311-account-create-update-mv65h\" (UID: \"bb76c3b5-a137-4408-9cc4-7e17505b7989\") " pod="openstack/neutron-b311-account-create-update-mv65h" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.186353 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece2b5de-984b-4a8c-8115-e84363f5f599-operator-scripts\") pod \"neutron-db-create-9cfwh\" (UID: \"ece2b5de-984b-4a8c-8115-e84363f5f599\") " pod="openstack/neutron-db-create-9cfwh" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.186394 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f-operator-scripts\") pod \"heat-1468-account-create-update-kfzbt\" (UID: \"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f\") " pod="openstack/heat-1468-account-create-update-kfzbt" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.186421 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hm56\" (UniqueName: \"kubernetes.io/projected/bb76c3b5-a137-4408-9cc4-7e17505b7989-kube-api-access-4hm56\") pod \"neutron-b311-account-create-update-mv65h\" (UID: \"bb76c3b5-a137-4408-9cc4-7e17505b7989\") " pod="openstack/neutron-b311-account-create-update-mv65h" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.186444 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jgkw\" (UniqueName: \"kubernetes.io/projected/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f-kube-api-access-6jgkw\") pod \"heat-1468-account-create-update-kfzbt\" (UID: \"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f\") " pod="openstack/heat-1468-account-create-update-kfzbt" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.192699 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-afd2-account-create-update-dnjwx" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.240664 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.289551 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f-operator-scripts\") pod \"heat-1468-account-create-update-kfzbt\" (UID: \"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f\") " pod="openstack/heat-1468-account-create-update-kfzbt" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.289653 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hm56\" (UniqueName: \"kubernetes.io/projected/bb76c3b5-a137-4408-9cc4-7e17505b7989-kube-api-access-4hm56\") pod \"neutron-b311-account-create-update-mv65h\" (UID: \"bb76c3b5-a137-4408-9cc4-7e17505b7989\") " pod="openstack/neutron-b311-account-create-update-mv65h" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.289686 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jgkw\" (UniqueName: \"kubernetes.io/projected/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f-kube-api-access-6jgkw\") pod \"heat-1468-account-create-update-kfzbt\" (UID: \"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f\") " pod="openstack/heat-1468-account-create-update-kfzbt" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.289819 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khd9f\" (UniqueName: \"kubernetes.io/projected/ece2b5de-984b-4a8c-8115-e84363f5f599-kube-api-access-khd9f\") pod \"neutron-db-create-9cfwh\" (UID: \"ece2b5de-984b-4a8c-8115-e84363f5f599\") " pod="openstack/neutron-db-create-9cfwh" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.289897 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb76c3b5-a137-4408-9cc4-7e17505b7989-operator-scripts\") pod \"neutron-b311-account-create-update-mv65h\" (UID: \"bb76c3b5-a137-4408-9cc4-7e17505b7989\") " pod="openstack/neutron-b311-account-create-update-mv65h" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.289954 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece2b5de-984b-4a8c-8115-e84363f5f599-operator-scripts\") pod \"neutron-db-create-9cfwh\" (UID: \"ece2b5de-984b-4a8c-8115-e84363f5f599\") " pod="openstack/neutron-db-create-9cfwh" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.291188 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece2b5de-984b-4a8c-8115-e84363f5f599-operator-scripts\") pod \"neutron-db-create-9cfwh\" (UID: \"ece2b5de-984b-4a8c-8115-e84363f5f599\") " pod="openstack/neutron-db-create-9cfwh" Jan 26 
08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.291871 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f-operator-scripts\") pod \"heat-1468-account-create-update-kfzbt\" (UID: \"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f\") " pod="openstack/heat-1468-account-create-update-kfzbt" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.293111 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb76c3b5-a137-4408-9cc4-7e17505b7989-operator-scripts\") pod \"neutron-b311-account-create-update-mv65h\" (UID: \"bb76c3b5-a137-4408-9cc4-7e17505b7989\") " pod="openstack/neutron-b311-account-create-update-mv65h" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.316871 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khd9f\" (UniqueName: \"kubernetes.io/projected/ece2b5de-984b-4a8c-8115-e84363f5f599-kube-api-access-khd9f\") pod \"neutron-db-create-9cfwh\" (UID: \"ece2b5de-984b-4a8c-8115-e84363f5f599\") " pod="openstack/neutron-db-create-9cfwh" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.325159 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hm56\" (UniqueName: \"kubernetes.io/projected/bb76c3b5-a137-4408-9cc4-7e17505b7989-kube-api-access-4hm56\") pod \"neutron-b311-account-create-update-mv65h\" (UID: \"bb76c3b5-a137-4408-9cc4-7e17505b7989\") " pod="openstack/neutron-b311-account-create-update-mv65h" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.335293 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jgkw\" (UniqueName: \"kubernetes.io/projected/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f-kube-api-access-6jgkw\") pod \"heat-1468-account-create-update-kfzbt\" (UID: \"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f\") " pod="openstack/heat-1468-account-create-update-kfzbt" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.357033 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.421585 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"9820885a606f9b6248d92c48de743bce098f3b8d3b8d2333780d065f05c92469"} Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.421908 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"60814b4599011e61cb99ba7aa29ad743416abff7118a621b93c39b0bdb768839"} Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.474914 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9cfwh" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.494201 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-77pc8"] Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.498379 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b311-account-create-update-mv65h" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.630865 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-1468-account-create-update-kfzbt" Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.786842 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-mj4kn"] Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.843764 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7f99-account-create-update-7hh6g"] Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.877288 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-afd2-account-create-update-dnjwx"] Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.924553 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-pbrx8"] Jan 26 08:10:30 crc kubenswrapper[4806]: I0126 08:10:30.955170 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-qc6mc"] Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.366467 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9cfwh"] Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.383763 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-b311-account-create-update-mv65h"] Jan 26 08:10:31 crc kubenswrapper[4806]: W0126 08:10:31.399024 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece2b5de_984b_4a8c_8115_e84363f5f599.slice/crio-25b491095020b9b1ff1e4a147015316b952b813165bc92cb9ae854db6ef529fd WatchSource:0}: Error finding container 25b491095020b9b1ff1e4a147015316b952b813165bc92cb9ae854db6ef529fd: Status 404 returned error can't find the container with id 25b491095020b9b1ff1e4a147015316b952b813165bc92cb9ae854db6ef529fd Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.443094 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b311-account-create-update-mv65h" event={"ID":"bb76c3b5-a137-4408-9cc4-7e17505b7989","Type":"ContainerStarted","Data":"e9df2615df616fa13f83ad6c6802aa3f80d49044a2c1e90b5e6cfdc8990a9e40"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.445933 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9cfwh" event={"ID":"ece2b5de-984b-4a8c-8115-e84363f5f599","Type":"ContainerStarted","Data":"25b491095020b9b1ff1e4a147015316b952b813165bc92cb9ae854db6ef529fd"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.448863 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-afd2-account-create-update-dnjwx" event={"ID":"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc","Type":"ContainerStarted","Data":"0905251ab4adf243890245d4a714a0591b6c1f6013cf18411905feebb817b2c3"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.448894 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-afd2-account-create-update-dnjwx" event={"ID":"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc","Type":"ContainerStarted","Data":"79ad83af342ac762529d2644615f388ab64de1fef9bcfee33027ab64ba176309"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.472454 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-qc6mc" event={"ID":"78af78c0-adca-4ff1-960f-5d8f918e2a1a","Type":"ContainerStarted","Data":"15a235ce747e901deba70eabeb9041e9204463033815d5fac24a664085f0b122"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.472704 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-qc6mc" 
event={"ID":"78af78c0-adca-4ff1-960f-5d8f918e2a1a","Type":"ContainerStarted","Data":"50ee38742550af988d3076d4d0a56a59c563b82f87ba2a73fcc700a579f718b8"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.491320 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7f99-account-create-update-7hh6g" event={"ID":"0bffa786-3a1c-4303-b303-8500b3614ab8","Type":"ContainerStarted","Data":"9a79b1dcfcbfab56017d36700ba15abd4bff6228c5d823928b0e2b73ce2cf02b"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.491366 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7f99-account-create-update-7hh6g" event={"ID":"0bffa786-3a1c-4303-b303-8500b3614ab8","Type":"ContainerStarted","Data":"b8b6f94e49dc1f08859826ef468184390650866cdeb61518cbdde9b3d9d63937"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.499312 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-afd2-account-create-update-dnjwx" podStartSLOduration=2.499296201 podStartE2EDuration="2.499296201s" podCreationTimestamp="2026-01-26 08:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:10:31.471069288 +0000 UTC m=+1010.735477344" watchObservedRunningTime="2026-01-26 08:10:31.499296201 +0000 UTC m=+1010.763704257" Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.517054 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-qc6mc" podStartSLOduration=2.517035279 podStartE2EDuration="2.517035279s" podCreationTimestamp="2026-01-26 08:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:10:31.492655024 +0000 UTC m=+1010.757063080" watchObservedRunningTime="2026-01-26 08:10:31.517035279 +0000 UTC m=+1010.781443335" Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.523286 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"96c944400177ecc3cefeb8c046c6aae7c28bb351ecfff91531448023537a8cc3"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.532615 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7f99-account-create-update-7hh6g" podStartSLOduration=2.532596816 podStartE2EDuration="2.532596816s" podCreationTimestamp="2026-01-26 08:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:10:31.512571754 +0000 UTC m=+1010.776979810" watchObservedRunningTime="2026-01-26 08:10:31.532596816 +0000 UTC m=+1010.797004872" Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.533223 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-77pc8" event={"ID":"6ec02cc0-9e30-460d-938a-b04b357649d3","Type":"ContainerStarted","Data":"1c2571f0fb7c51c720262fd6186f9b1504a9adee5c43e49791b0b411023e70f7"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.533265 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-77pc8" event={"ID":"6ec02cc0-9e30-460d-938a-b04b357649d3","Type":"ContainerStarted","Data":"fe4e88557a20cad32f7c5ffe00d28c4e2639ed19898f4ebeca8d7ad03f6e413f"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.536096 4806 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/keystone-db-sync-pbrx8" event={"ID":"0ba70c1a-7213-421b-b154-ac57621252b8","Type":"ContainerStarted","Data":"43294f97ab9c3754cc176932874929014155bf084434ee1213b877ec9b24d440"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.538039 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mj4kn" event={"ID":"979ab357-98b5-4ee2-87d8-678702adfab2","Type":"ContainerStarted","Data":"bb20d674a5a435f4c70852326dd1a654cf2c0661e2fe07882dc9bad948f27578"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.538063 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mj4kn" event={"ID":"979ab357-98b5-4ee2-87d8-678702adfab2","Type":"ContainerStarted","Data":"eb9ce88c0e80876ac2618168dde2f87f948f2e42744d3069af67978da7baae1b"} Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.572911 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-mj4kn" podStartSLOduration=2.572893838 podStartE2EDuration="2.572893838s" podCreationTimestamp="2026-01-26 08:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:10:31.568855005 +0000 UTC m=+1010.833263061" watchObservedRunningTime="2026-01-26 08:10:31.572893838 +0000 UTC m=+1010.837301894" Jan 26 08:10:31 crc kubenswrapper[4806]: I0126 08:10:31.641732 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-1468-account-create-update-kfzbt"] Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.558806 4806 generic.go:334] "Generic (PLEG): container finished" podID="2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f" containerID="ae8bbb0365b30b6e5f5085780b82e519ddb37473f7440ea8445c9598ce41ad54" exitCode=0 Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.558881 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1468-account-create-update-kfzbt" event={"ID":"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f","Type":"ContainerDied","Data":"ae8bbb0365b30b6e5f5085780b82e519ddb37473f7440ea8445c9598ce41ad54"} Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.558907 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1468-account-create-update-kfzbt" event={"ID":"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f","Type":"ContainerStarted","Data":"cb0753f6691eec3e81178f509b73dd697914c34ca8ef3f38e3f51e5089b7c830"} Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.560614 4806 generic.go:334] "Generic (PLEG): container finished" podID="6ec02cc0-9e30-460d-938a-b04b357649d3" containerID="1c2571f0fb7c51c720262fd6186f9b1504a9adee5c43e49791b0b411023e70f7" exitCode=0 Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.560679 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-77pc8" event={"ID":"6ec02cc0-9e30-460d-938a-b04b357649d3","Type":"ContainerDied","Data":"1c2571f0fb7c51c720262fd6186f9b1504a9adee5c43e49791b0b411023e70f7"} Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.562174 4806 generic.go:334] "Generic (PLEG): container finished" podID="c7cf5184-3f6b-426b-a01c-07ba5de2b9fc" containerID="0905251ab4adf243890245d4a714a0591b6c1f6013cf18411905feebb817b2c3" exitCode=0 Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.562210 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-afd2-account-create-update-dnjwx" 
event={"ID":"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc","Type":"ContainerDied","Data":"0905251ab4adf243890245d4a714a0591b6c1f6013cf18411905feebb817b2c3"} Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.573137 4806 generic.go:334] "Generic (PLEG): container finished" podID="979ab357-98b5-4ee2-87d8-678702adfab2" containerID="bb20d674a5a435f4c70852326dd1a654cf2c0661e2fe07882dc9bad948f27578" exitCode=0 Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.573232 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mj4kn" event={"ID":"979ab357-98b5-4ee2-87d8-678702adfab2","Type":"ContainerDied","Data":"bb20d674a5a435f4c70852326dd1a654cf2c0661e2fe07882dc9bad948f27578"} Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.576481 4806 generic.go:334] "Generic (PLEG): container finished" podID="78af78c0-adca-4ff1-960f-5d8f918e2a1a" containerID="15a235ce747e901deba70eabeb9041e9204463033815d5fac24a664085f0b122" exitCode=0 Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.576573 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-qc6mc" event={"ID":"78af78c0-adca-4ff1-960f-5d8f918e2a1a","Type":"ContainerDied","Data":"15a235ce747e901deba70eabeb9041e9204463033815d5fac24a664085f0b122"} Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.578430 4806 generic.go:334] "Generic (PLEG): container finished" podID="0bffa786-3a1c-4303-b303-8500b3614ab8" containerID="9a79b1dcfcbfab56017d36700ba15abd4bff6228c5d823928b0e2b73ce2cf02b" exitCode=0 Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.578473 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7f99-account-create-update-7hh6g" event={"ID":"0bffa786-3a1c-4303-b303-8500b3614ab8","Type":"ContainerDied","Data":"9a79b1dcfcbfab56017d36700ba15abd4bff6228c5d823928b0e2b73ce2cf02b"} Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.593929 4806 generic.go:334] "Generic (PLEG): container finished" podID="bb76c3b5-a137-4408-9cc4-7e17505b7989" containerID="2d56710877ad8fe55db99dc1c23ddf68117a93ae6e90fdf4e80f5fbff5a790c3" exitCode=0 Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.594025 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b311-account-create-update-mv65h" event={"ID":"bb76c3b5-a137-4408-9cc4-7e17505b7989","Type":"ContainerDied","Data":"2d56710877ad8fe55db99dc1c23ddf68117a93ae6e90fdf4e80f5fbff5a790c3"} Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.596215 4806 generic.go:334] "Generic (PLEG): container finished" podID="ece2b5de-984b-4a8c-8115-e84363f5f599" containerID="05c7a5902e97ab25c9fcfb02045aa100127567179f4be68e49409958a671cc9c" exitCode=0 Jan 26 08:10:32 crc kubenswrapper[4806]: I0126 08:10:32.596272 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9cfwh" event={"ID":"ece2b5de-984b-4a8c-8115-e84363f5f599","Type":"ContainerDied","Data":"05c7a5902e97ab25c9fcfb02045aa100127567179f4be68e49409958a671cc9c"} Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.044891 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-77pc8" Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.165152 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qspq\" (UniqueName: \"kubernetes.io/projected/6ec02cc0-9e30-460d-938a-b04b357649d3-kube-api-access-8qspq\") pod \"6ec02cc0-9e30-460d-938a-b04b357649d3\" (UID: \"6ec02cc0-9e30-460d-938a-b04b357649d3\") " Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.165192 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec02cc0-9e30-460d-938a-b04b357649d3-operator-scripts\") pod \"6ec02cc0-9e30-460d-938a-b04b357649d3\" (UID: \"6ec02cc0-9e30-460d-938a-b04b357649d3\") " Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.166962 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ec02cc0-9e30-460d-938a-b04b357649d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ec02cc0-9e30-460d-938a-b04b357649d3" (UID: "6ec02cc0-9e30-460d-938a-b04b357649d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.171683 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ec02cc0-9e30-460d-938a-b04b357649d3-kube-api-access-8qspq" (OuterVolumeSpecName: "kube-api-access-8qspq") pod "6ec02cc0-9e30-460d-938a-b04b357649d3" (UID: "6ec02cc0-9e30-460d-938a-b04b357649d3"). InnerVolumeSpecName "kube-api-access-8qspq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.267112 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qspq\" (UniqueName: \"kubernetes.io/projected/6ec02cc0-9e30-460d-938a-b04b357649d3-kube-api-access-8qspq\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.267143 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ec02cc0-9e30-460d-938a-b04b357649d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.613722 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-77pc8" event={"ID":"6ec02cc0-9e30-460d-938a-b04b357649d3","Type":"ContainerDied","Data":"fe4e88557a20cad32f7c5ffe00d28c4e2639ed19898f4ebeca8d7ad03f6e413f"} Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.613955 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe4e88557a20cad32f7c5ffe00d28c4e2639ed19898f4ebeca8d7ad03f6e413f" Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.613990 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-77pc8" Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.641784 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"f5ed546af7a4bb13c2aee75f064443c803213da0174e0162685689a799c6741c"} Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.641830 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"03e51f1633e974d9972de07d4a45be8059fd984c3616db9b4453646580c96650"} Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.641843 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"ababa3e1570f6c587d9ec59acbd53d8de3c84b1db696eedbc7e1b865be98ffc6"} Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.641855 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"0b382202162dc05667c661f6b9a8225cc8dfa57ba81c2bbec574b7970136f14e"} Jan 26 08:10:33 crc kubenswrapper[4806]: I0126 08:10:33.641866 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"31827aa97340d15149fdea50bc9164b89fbd7d882620ebe3bb8029acdc5c6404"} Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.133228 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1468-account-create-update-kfzbt" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.206795 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f-operator-scripts\") pod \"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f\" (UID: \"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f\") " Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.207029 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jgkw\" (UniqueName: \"kubernetes.io/projected/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f-kube-api-access-6jgkw\") pod \"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f\" (UID: \"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f\") " Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.208685 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f" (UID: "2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.227985 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f-kube-api-access-6jgkw" (OuterVolumeSpecName: "kube-api-access-6jgkw") pod "2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f" (UID: "2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f"). InnerVolumeSpecName "kube-api-access-6jgkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.258769 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7f99-account-create-update-7hh6g" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.280772 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-afd2-account-create-update-dnjwx" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.309133 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jgkw\" (UniqueName: \"kubernetes.io/projected/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f-kube-api-access-6jgkw\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.309160 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.315082 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-mj4kn" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.322722 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9cfwh" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.410131 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bffa786-3a1c-4303-b303-8500b3614ab8-operator-scripts\") pod \"0bffa786-3a1c-4303-b303-8500b3614ab8\" (UID: \"0bffa786-3a1c-4303-b303-8500b3614ab8\") " Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.410165 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece2b5de-984b-4a8c-8115-e84363f5f599-operator-scripts\") pod \"ece2b5de-984b-4a8c-8115-e84363f5f599\" (UID: \"ece2b5de-984b-4a8c-8115-e84363f5f599\") " Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.410232 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khd9f\" (UniqueName: \"kubernetes.io/projected/ece2b5de-984b-4a8c-8115-e84363f5f599-kube-api-access-khd9f\") pod \"ece2b5de-984b-4a8c-8115-e84363f5f599\" (UID: \"ece2b5de-984b-4a8c-8115-e84363f5f599\") " Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.410274 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwngb\" (UniqueName: \"kubernetes.io/projected/0bffa786-3a1c-4303-b303-8500b3614ab8-kube-api-access-wwngb\") pod \"0bffa786-3a1c-4303-b303-8500b3614ab8\" (UID: \"0bffa786-3a1c-4303-b303-8500b3614ab8\") " Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.410303 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zg99\" (UniqueName: \"kubernetes.io/projected/979ab357-98b5-4ee2-87d8-678702adfab2-kube-api-access-8zg99\") pod \"979ab357-98b5-4ee2-87d8-678702adfab2\" (UID: \"979ab357-98b5-4ee2-87d8-678702adfab2\") " Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.410364 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979ab357-98b5-4ee2-87d8-678702adfab2-operator-scripts\") pod \"979ab357-98b5-4ee2-87d8-678702adfab2\" (UID: \"979ab357-98b5-4ee2-87d8-678702adfab2\") " Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.410393 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-l2tjt\" (UniqueName: \"kubernetes.io/projected/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc-kube-api-access-l2tjt\") pod \"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc\" (UID: \"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc\") " Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.410445 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc-operator-scripts\") pod \"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc\" (UID: \"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc\") " Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.410661 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bffa786-3a1c-4303-b303-8500b3614ab8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0bffa786-3a1c-4303-b303-8500b3614ab8" (UID: "0bffa786-3a1c-4303-b303-8500b3614ab8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.410689 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece2b5de-984b-4a8c-8115-e84363f5f599-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ece2b5de-984b-4a8c-8115-e84363f5f599" (UID: "ece2b5de-984b-4a8c-8115-e84363f5f599"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.411087 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7cf5184-3f6b-426b-a01c-07ba5de2b9fc" (UID: "c7cf5184-3f6b-426b-a01c-07ba5de2b9fc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.411102 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0bffa786-3a1c-4303-b303-8500b3614ab8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.411117 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece2b5de-984b-4a8c-8115-e84363f5f599-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.411450 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/979ab357-98b5-4ee2-87d8-678702adfab2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "979ab357-98b5-4ee2-87d8-678702adfab2" (UID: "979ab357-98b5-4ee2-87d8-678702adfab2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.414658 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bffa786-3a1c-4303-b303-8500b3614ab8-kube-api-access-wwngb" (OuterVolumeSpecName: "kube-api-access-wwngb") pod "0bffa786-3a1c-4303-b303-8500b3614ab8" (UID: "0bffa786-3a1c-4303-b303-8500b3614ab8"). InnerVolumeSpecName "kube-api-access-wwngb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.414818 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/979ab357-98b5-4ee2-87d8-678702adfab2-kube-api-access-8zg99" (OuterVolumeSpecName: "kube-api-access-8zg99") pod "979ab357-98b5-4ee2-87d8-678702adfab2" (UID: "979ab357-98b5-4ee2-87d8-678702adfab2"). InnerVolumeSpecName "kube-api-access-8zg99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.414969 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc-kube-api-access-l2tjt" (OuterVolumeSpecName: "kube-api-access-l2tjt") pod "c7cf5184-3f6b-426b-a01c-07ba5de2b9fc" (UID: "c7cf5184-3f6b-426b-a01c-07ba5de2b9fc"). InnerVolumeSpecName "kube-api-access-l2tjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.415942 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece2b5de-984b-4a8c-8115-e84363f5f599-kube-api-access-khd9f" (OuterVolumeSpecName: "kube-api-access-khd9f") pod "ece2b5de-984b-4a8c-8115-e84363f5f599" (UID: "ece2b5de-984b-4a8c-8115-e84363f5f599"). InnerVolumeSpecName "kube-api-access-khd9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.512785 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/979ab357-98b5-4ee2-87d8-678702adfab2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.512818 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2tjt\" (UniqueName: \"kubernetes.io/projected/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc-kube-api-access-l2tjt\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.512830 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.512839 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khd9f\" (UniqueName: \"kubernetes.io/projected/ece2b5de-984b-4a8c-8115-e84363f5f599-kube-api-access-khd9f\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.512847 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwngb\" (UniqueName: \"kubernetes.io/projected/0bffa786-3a1c-4303-b303-8500b3614ab8-kube-api-access-wwngb\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.512857 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zg99\" (UniqueName: \"kubernetes.io/projected/979ab357-98b5-4ee2-87d8-678702adfab2-kube-api-access-8zg99\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.654544 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9cfwh" event={"ID":"ece2b5de-984b-4a8c-8115-e84363f5f599","Type":"ContainerDied","Data":"25b491095020b9b1ff1e4a147015316b952b813165bc92cb9ae854db6ef529fd"} Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.654580 4806 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="25b491095020b9b1ff1e4a147015316b952b813165bc92cb9ae854db6ef529fd" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.654631 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9cfwh" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.656122 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1468-account-create-update-kfzbt" event={"ID":"2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f","Type":"ContainerDied","Data":"cb0753f6691eec3e81178f509b73dd697914c34ca8ef3f38e3f51e5089b7c830"} Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.656139 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb0753f6691eec3e81178f509b73dd697914c34ca8ef3f38e3f51e5089b7c830" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.656175 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1468-account-create-update-kfzbt" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.667779 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-afd2-account-create-update-dnjwx" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.667794 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-afd2-account-create-update-dnjwx" event={"ID":"c7cf5184-3f6b-426b-a01c-07ba5de2b9fc","Type":"ContainerDied","Data":"79ad83af342ac762529d2644615f388ab64de1fef9bcfee33027ab64ba176309"} Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.667813 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79ad83af342ac762529d2644615f388ab64de1fef9bcfee33027ab64ba176309" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.686696 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"cf4e88ab5b941d45e27eef640795f940b2dfea2f2ff78ef2d0a8365db76b0942"} Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.686733 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"fcc22009-cca0-438b-8f2f-5c245db7c70c","Type":"ContainerStarted","Data":"52f586a570ec079bff85f36d703b22bf3b6ff90ff145fe59d65655daaa0dd8af"} Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.693215 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-mj4kn" event={"ID":"979ab357-98b5-4ee2-87d8-678702adfab2","Type":"ContainerDied","Data":"eb9ce88c0e80876ac2618168dde2f87f948f2e42744d3069af67978da7baae1b"} Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.693261 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb9ce88c0e80876ac2618168dde2f87f948f2e42744d3069af67978da7baae1b" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.693351 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-mj4kn" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.696853 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7f99-account-create-update-7hh6g" event={"ID":"0bffa786-3a1c-4303-b303-8500b3614ab8","Type":"ContainerDied","Data":"b8b6f94e49dc1f08859826ef468184390650866cdeb61518cbdde9b3d9d63937"} Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.696894 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8b6f94e49dc1f08859826ef468184390650866cdeb61518cbdde9b3d9d63937" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.696981 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7f99-account-create-update-7hh6g" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.993457 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=36.988403372 podStartE2EDuration="43.993439699s" podCreationTimestamp="2026-01-26 08:09:51 +0000 UTC" firstStartedPulling="2026-01-26 08:10:25.361316571 +0000 UTC m=+1004.625724637" lastFinishedPulling="2026-01-26 08:10:32.366352908 +0000 UTC m=+1011.630760964" observedRunningTime="2026-01-26 08:10:34.735138753 +0000 UTC m=+1013.999546819" watchObservedRunningTime="2026-01-26 08:10:34.993439699 +0000 UTC m=+1014.257847755" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996302 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-kx6s2"] Jan 26 08:10:34 crc kubenswrapper[4806]: E0126 08:10:34.996631 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec02cc0-9e30-460d-938a-b04b357649d3" containerName="mariadb-database-create" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996646 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec02cc0-9e30-460d-938a-b04b357649d3" containerName="mariadb-database-create" Jan 26 08:10:34 crc kubenswrapper[4806]: E0126 08:10:34.996660 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece2b5de-984b-4a8c-8115-e84363f5f599" containerName="mariadb-database-create" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996666 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece2b5de-984b-4a8c-8115-e84363f5f599" containerName="mariadb-database-create" Jan 26 08:10:34 crc kubenswrapper[4806]: E0126 08:10:34.996681 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cf5184-3f6b-426b-a01c-07ba5de2b9fc" containerName="mariadb-account-create-update" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996687 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cf5184-3f6b-426b-a01c-07ba5de2b9fc" containerName="mariadb-account-create-update" Jan 26 08:10:34 crc kubenswrapper[4806]: E0126 08:10:34.996698 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="979ab357-98b5-4ee2-87d8-678702adfab2" containerName="mariadb-database-create" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996704 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="979ab357-98b5-4ee2-87d8-678702adfab2" containerName="mariadb-database-create" Jan 26 08:10:34 crc kubenswrapper[4806]: E0126 08:10:34.996715 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f" containerName="mariadb-account-create-update" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996720 4806 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f" containerName="mariadb-account-create-update" Jan 26 08:10:34 crc kubenswrapper[4806]: E0126 08:10:34.996736 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bffa786-3a1c-4303-b303-8500b3614ab8" containerName="mariadb-account-create-update" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996742 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bffa786-3a1c-4303-b303-8500b3614ab8" containerName="mariadb-account-create-update" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996895 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece2b5de-984b-4a8c-8115-e84363f5f599" containerName="mariadb-database-create" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996912 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bffa786-3a1c-4303-b303-8500b3614ab8" containerName="mariadb-account-create-update" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996924 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="979ab357-98b5-4ee2-87d8-678702adfab2" containerName="mariadb-database-create" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996938 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f" containerName="mariadb-account-create-update" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996947 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cf5184-3f6b-426b-a01c-07ba5de2b9fc" containerName="mariadb-account-create-update" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.996953 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ec02cc0-9e30-460d-938a-b04b357649d3" containerName="mariadb-database-create" Jan 26 08:10:34 crc kubenswrapper[4806]: I0126 08:10:34.997772 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.009180 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-kx6s2"] Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.010699 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.128173 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-config\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.128549 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.128583 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.128598 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.128618 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq9kk\" (UniqueName: \"kubernetes.io/projected/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-kube-api-access-fq9kk\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.128641 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.229667 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.229707 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: 
\"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.229730 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq9kk\" (UniqueName: \"kubernetes.io/projected/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-kube-api-access-fq9kk\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.229767 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.229955 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-config\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.229997 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.230681 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.230866 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-config\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.230984 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.231184 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.231446 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc 
kubenswrapper[4806]: I0126 08:10:35.246808 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq9kk\" (UniqueName: \"kubernetes.io/projected/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-kube-api-access-fq9kk\") pod \"dnsmasq-dns-5c79d794d7-kx6s2\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:35 crc kubenswrapper[4806]: I0126 08:10:35.357540 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.119839 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b311-account-create-update-mv65h" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.179463 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hm56\" (UniqueName: \"kubernetes.io/projected/bb76c3b5-a137-4408-9cc4-7e17505b7989-kube-api-access-4hm56\") pod \"bb76c3b5-a137-4408-9cc4-7e17505b7989\" (UID: \"bb76c3b5-a137-4408-9cc4-7e17505b7989\") " Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.179639 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb76c3b5-a137-4408-9cc4-7e17505b7989-operator-scripts\") pod \"bb76c3b5-a137-4408-9cc4-7e17505b7989\" (UID: \"bb76c3b5-a137-4408-9cc4-7e17505b7989\") " Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.184296 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb76c3b5-a137-4408-9cc4-7e17505b7989-kube-api-access-4hm56" (OuterVolumeSpecName: "kube-api-access-4hm56") pod "bb76c3b5-a137-4408-9cc4-7e17505b7989" (UID: "bb76c3b5-a137-4408-9cc4-7e17505b7989"). InnerVolumeSpecName "kube-api-access-4hm56". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.184720 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb76c3b5-a137-4408-9cc4-7e17505b7989-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb76c3b5-a137-4408-9cc4-7e17505b7989" (UID: "bb76c3b5-a137-4408-9cc4-7e17505b7989"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.233144 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-qc6mc" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.281943 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgl79\" (UniqueName: \"kubernetes.io/projected/78af78c0-adca-4ff1-960f-5d8f918e2a1a-kube-api-access-hgl79\") pod \"78af78c0-adca-4ff1-960f-5d8f918e2a1a\" (UID: \"78af78c0-adca-4ff1-960f-5d8f918e2a1a\") " Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.282294 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78af78c0-adca-4ff1-960f-5d8f918e2a1a-operator-scripts\") pod \"78af78c0-adca-4ff1-960f-5d8f918e2a1a\" (UID: \"78af78c0-adca-4ff1-960f-5d8f918e2a1a\") " Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.282753 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hm56\" (UniqueName: \"kubernetes.io/projected/bb76c3b5-a137-4408-9cc4-7e17505b7989-kube-api-access-4hm56\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.282764 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb76c3b5-a137-4408-9cc4-7e17505b7989-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.283078 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78af78c0-adca-4ff1-960f-5d8f918e2a1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78af78c0-adca-4ff1-960f-5d8f918e2a1a" (UID: "78af78c0-adca-4ff1-960f-5d8f918e2a1a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.289746 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78af78c0-adca-4ff1-960f-5d8f918e2a1a-kube-api-access-hgl79" (OuterVolumeSpecName: "kube-api-access-hgl79") pod "78af78c0-adca-4ff1-960f-5d8f918e2a1a" (UID: "78af78c0-adca-4ff1-960f-5d8f918e2a1a"). InnerVolumeSpecName "kube-api-access-hgl79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.384758 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78af78c0-adca-4ff1-960f-5d8f918e2a1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.384806 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgl79\" (UniqueName: \"kubernetes.io/projected/78af78c0-adca-4ff1-960f-5d8f918e2a1a-kube-api-access-hgl79\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.545944 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-kx6s2"] Jan 26 08:10:38 crc kubenswrapper[4806]: W0126 08:10:38.548787 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6b83d6c_c555_44af_a218_2c2f81f3bb0c.slice/crio-376005f75c92d5e0bc54afb0a9fc8ef2837fcf31d0f7cb3915820de363053bef WatchSource:0}: Error finding container 376005f75c92d5e0bc54afb0a9fc8ef2837fcf31d0f7cb3915820de363053bef: Status 404 returned error can't find the container with id 376005f75c92d5e0bc54afb0a9fc8ef2837fcf31d0f7cb3915820de363053bef Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.731052 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hfpkt" event={"ID":"f602c552-c375-4d9b-96fc-633ad5811f7d","Type":"ContainerStarted","Data":"a4fedad710b52b7d491be18c79d774060b6fd791076c1359f72a6fc755541add"} Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.733700 4806 generic.go:334] "Generic (PLEG): container finished" podID="c6b83d6c-c555-44af-a218-2c2f81f3bb0c" containerID="84349ee66015b9ecf4c1fdc86fdb0e6a013b6987ec74b1a25a2245c77c8c5fee" exitCode=0 Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.733999 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" event={"ID":"c6b83d6c-c555-44af-a218-2c2f81f3bb0c","Type":"ContainerDied","Data":"84349ee66015b9ecf4c1fdc86fdb0e6a013b6987ec74b1a25a2245c77c8c5fee"} Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.734062 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" event={"ID":"c6b83d6c-c555-44af-a218-2c2f81f3bb0c","Type":"ContainerStarted","Data":"376005f75c92d5e0bc54afb0a9fc8ef2837fcf31d0f7cb3915820de363053bef"} Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.741941 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-qc6mc" event={"ID":"78af78c0-adca-4ff1-960f-5d8f918e2a1a","Type":"ContainerDied","Data":"50ee38742550af988d3076d4d0a56a59c563b82f87ba2a73fcc700a579f718b8"} Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.741977 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50ee38742550af988d3076d4d0a56a59c563b82f87ba2a73fcc700a579f718b8" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.742033 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-qc6mc" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.752505 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-b311-account-create-update-mv65h" event={"ID":"bb76c3b5-a137-4408-9cc4-7e17505b7989","Type":"ContainerDied","Data":"e9df2615df616fa13f83ad6c6802aa3f80d49044a2c1e90b5e6cfdc8990a9e40"} Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.752571 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9df2615df616fa13f83ad6c6802aa3f80d49044a2c1e90b5e6cfdc8990a9e40" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.752734 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-hfpkt" podStartSLOduration=4.174701479 podStartE2EDuration="38.752711974s" podCreationTimestamp="2026-01-26 08:10:00 +0000 UTC" firstStartedPulling="2026-01-26 08:10:03.540856863 +0000 UTC m=+982.805264919" lastFinishedPulling="2026-01-26 08:10:38.118867358 +0000 UTC m=+1017.383275414" observedRunningTime="2026-01-26 08:10:38.752647502 +0000 UTC m=+1018.017055558" watchObservedRunningTime="2026-01-26 08:10:38.752711974 +0000 UTC m=+1018.017120030" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.752960 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-b311-account-create-update-mv65h" Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.759472 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pbrx8" event={"ID":"0ba70c1a-7213-421b-b154-ac57621252b8","Type":"ContainerStarted","Data":"cb07ccce8416892e78fbdd092a4131eded9eeed3dcf01fc6280660e1a124e48a"} Jan 26 08:10:38 crc kubenswrapper[4806]: I0126 08:10:38.800930 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-pbrx8" podStartSLOduration=2.641604579 podStartE2EDuration="9.800914638s" podCreationTimestamp="2026-01-26 08:10:29 +0000 UTC" firstStartedPulling="2026-01-26 08:10:30.958019565 +0000 UTC m=+1010.222427621" lastFinishedPulling="2026-01-26 08:10:38.117329624 +0000 UTC m=+1017.381737680" observedRunningTime="2026-01-26 08:10:38.794151628 +0000 UTC m=+1018.058559684" watchObservedRunningTime="2026-01-26 08:10:38.800914638 +0000 UTC m=+1018.065322694" Jan 26 08:10:39 crc kubenswrapper[4806]: I0126 08:10:39.769071 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" event={"ID":"c6b83d6c-c555-44af-a218-2c2f81f3bb0c","Type":"ContainerStarted","Data":"d197da3e9aee1ab798a86b1e5dd13236ba62e7824f83f3a643665df3be0383c8"} Jan 26 08:10:39 crc kubenswrapper[4806]: I0126 08:10:39.789820 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" podStartSLOduration=5.789798318 podStartE2EDuration="5.789798318s" podCreationTimestamp="2026-01-26 08:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:10:39.784353475 +0000 UTC m=+1019.048761541" watchObservedRunningTime="2026-01-26 08:10:39.789798318 +0000 UTC m=+1019.054206374" Jan 26 08:10:40 crc kubenswrapper[4806]: I0126 08:10:40.358305 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:41 crc kubenswrapper[4806]: I0126 08:10:41.784510 4806 generic.go:334] "Generic (PLEG): container finished" 
podID="0ba70c1a-7213-421b-b154-ac57621252b8" containerID="cb07ccce8416892e78fbdd092a4131eded9eeed3dcf01fc6280660e1a124e48a" exitCode=0 Jan 26 08:10:41 crc kubenswrapper[4806]: I0126 08:10:41.784561 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pbrx8" event={"ID":"0ba70c1a-7213-421b-b154-ac57621252b8","Type":"ContainerDied","Data":"cb07ccce8416892e78fbdd092a4131eded9eeed3dcf01fc6280660e1a124e48a"} Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.245766 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.370883 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba70c1a-7213-421b-b154-ac57621252b8-combined-ca-bundle\") pod \"0ba70c1a-7213-421b-b154-ac57621252b8\" (UID: \"0ba70c1a-7213-421b-b154-ac57621252b8\") " Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.371222 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mffh\" (UniqueName: \"kubernetes.io/projected/0ba70c1a-7213-421b-b154-ac57621252b8-kube-api-access-5mffh\") pod \"0ba70c1a-7213-421b-b154-ac57621252b8\" (UID: \"0ba70c1a-7213-421b-b154-ac57621252b8\") " Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.371434 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba70c1a-7213-421b-b154-ac57621252b8-config-data\") pod \"0ba70c1a-7213-421b-b154-ac57621252b8\" (UID: \"0ba70c1a-7213-421b-b154-ac57621252b8\") " Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.386761 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ba70c1a-7213-421b-b154-ac57621252b8-kube-api-access-5mffh" (OuterVolumeSpecName: "kube-api-access-5mffh") pod "0ba70c1a-7213-421b-b154-ac57621252b8" (UID: "0ba70c1a-7213-421b-b154-ac57621252b8"). InnerVolumeSpecName "kube-api-access-5mffh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.415215 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba70c1a-7213-421b-b154-ac57621252b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ba70c1a-7213-421b-b154-ac57621252b8" (UID: "0ba70c1a-7213-421b-b154-ac57621252b8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.428489 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ba70c1a-7213-421b-b154-ac57621252b8-config-data" (OuterVolumeSpecName: "config-data") pod "0ba70c1a-7213-421b-b154-ac57621252b8" (UID: "0ba70c1a-7213-421b-b154-ac57621252b8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.483478 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ba70c1a-7213-421b-b154-ac57621252b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.483509 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mffh\" (UniqueName: \"kubernetes.io/projected/0ba70c1a-7213-421b-b154-ac57621252b8-kube-api-access-5mffh\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.483541 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ba70c1a-7213-421b-b154-ac57621252b8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.805831 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-pbrx8" event={"ID":"0ba70c1a-7213-421b-b154-ac57621252b8","Type":"ContainerDied","Data":"43294f97ab9c3754cc176932874929014155bf084434ee1213b877ec9b24d440"} Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.805869 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43294f97ab9c3754cc176932874929014155bf084434ee1213b877ec9b24d440" Jan 26 08:10:43 crc kubenswrapper[4806]: I0126 08:10:43.805933 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-pbrx8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.117017 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hvd56"] Jan 26 08:10:44 crc kubenswrapper[4806]: E0126 08:10:44.117601 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78af78c0-adca-4ff1-960f-5d8f918e2a1a" containerName="mariadb-database-create" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.117616 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="78af78c0-adca-4ff1-960f-5d8f918e2a1a" containerName="mariadb-database-create" Jan 26 08:10:44 crc kubenswrapper[4806]: E0126 08:10:44.117636 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb76c3b5-a137-4408-9cc4-7e17505b7989" containerName="mariadb-account-create-update" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.117642 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb76c3b5-a137-4408-9cc4-7e17505b7989" containerName="mariadb-account-create-update" Jan 26 08:10:44 crc kubenswrapper[4806]: E0126 08:10:44.117651 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ba70c1a-7213-421b-b154-ac57621252b8" containerName="keystone-db-sync" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.117657 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ba70c1a-7213-421b-b154-ac57621252b8" containerName="keystone-db-sync" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.117803 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb76c3b5-a137-4408-9cc4-7e17505b7989" containerName="mariadb-account-create-update" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.117817 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="78af78c0-adca-4ff1-960f-5d8f918e2a1a" containerName="mariadb-database-create" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.117832 4806 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0ba70c1a-7213-421b-b154-ac57621252b8" containerName="keystone-db-sync" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.118343 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.124454 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.124464 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.124500 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.124677 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.128017 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wsvrh" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.144052 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-kx6s2"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.144260 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" podUID="c6b83d6c-c555-44af-a218-2c2f81f3bb0c" containerName="dnsmasq-dns" containerID="cri-o://d197da3e9aee1ab798a86b1e5dd13236ba62e7824f83f3a643665df3be0383c8" gracePeriod=10 Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.150808 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.155858 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hvd56"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.198062 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fcbc8"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.199909 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.210233 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-scripts\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.210288 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-config-data\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.210324 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9f9c\" (UniqueName: \"kubernetes.io/projected/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-kube-api-access-c9f9c\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.210358 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-credential-keys\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.210375 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-fernet-keys\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.210390 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-combined-ca-bundle\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.231808 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fcbc8"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.267757 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-bjtkx"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.268700 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-bjtkx" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.272594 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-6ksgl" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.277178 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.305190 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-bjtkx"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.311564 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.311634 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8kqc\" (UniqueName: \"kubernetes.io/projected/30f57a1f-8a72-4dfa-88da-db7bd1312809-kube-api-access-p8kqc\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.311671 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-scripts\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.311697 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-dns-svc\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.311736 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-config-data\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.311778 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9f9c\" (UniqueName: \"kubernetes.io/projected/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-kube-api-access-c9f9c\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.311802 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.311828 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.311885 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-credential-keys\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.311912 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-fernet-keys\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.311935 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-combined-ca-bundle\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.311956 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-config\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.317420 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-config-data\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.321758 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-scripts\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.322106 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-credential-keys\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.322417 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-combined-ca-bundle\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.329385 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-fernet-keys\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " 
pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.350217 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9f9c\" (UniqueName: \"kubernetes.io/projected/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-kube-api-access-c9f9c\") pod \"keystone-bootstrap-hvd56\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.403096 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-r5vvf"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.409071 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.413146 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.413183 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.413224 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-config\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.413275 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19528149-09a1-44a5-b419-bbe91789d493-combined-ca-bundle\") pod \"heat-db-sync-bjtkx\" (UID: \"19528149-09a1-44a5-b419-bbe91789d493\") " pod="openstack/heat-db-sync-bjtkx" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.413292 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19528149-09a1-44a5-b419-bbe91789d493-config-data\") pod \"heat-db-sync-bjtkx\" (UID: \"19528149-09a1-44a5-b419-bbe91789d493\") " pod="openstack/heat-db-sync-bjtkx" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.413309 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.413329 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvctp\" (UniqueName: \"kubernetes.io/projected/19528149-09a1-44a5-b419-bbe91789d493-kube-api-access-kvctp\") pod \"heat-db-sync-bjtkx\" (UID: \"19528149-09a1-44a5-b419-bbe91789d493\") " pod="openstack/heat-db-sync-bjtkx" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.413348 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8kqc\" (UniqueName: \"kubernetes.io/projected/30f57a1f-8a72-4dfa-88da-db7bd1312809-kube-api-access-p8kqc\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.413371 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-dns-svc\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.414173 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-dns-svc\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.414188 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.414719 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.414803 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.415287 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-config\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.431239 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.435816 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.435842 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gd76f" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.436241 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.467637 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8kqc\" (UniqueName: \"kubernetes.io/projected/30f57a1f-8a72-4dfa-88da-db7bd1312809-kube-api-access-p8kqc\") pod \"dnsmasq-dns-5b868669f-fcbc8\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.510788 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-cbdcb8bcc-96jf5"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.512034 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.514403 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a51881-d18e-40dd-8dfb-a243d798133a-combined-ca-bundle\") pod \"neutron-db-sync-r5vvf\" (UID: \"b0a51881-d18e-40dd-8dfb-a243d798133a\") " pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.516205 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19528149-09a1-44a5-b419-bbe91789d493-combined-ca-bundle\") pod \"heat-db-sync-bjtkx\" (UID: \"19528149-09a1-44a5-b419-bbe91789d493\") " pod="openstack/heat-db-sync-bjtkx" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.516287 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19528149-09a1-44a5-b419-bbe91789d493-config-data\") pod \"heat-db-sync-bjtkx\" (UID: \"19528149-09a1-44a5-b419-bbe91789d493\") " pod="openstack/heat-db-sync-bjtkx" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.516383 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvctp\" (UniqueName: \"kubernetes.io/projected/19528149-09a1-44a5-b419-bbe91789d493-kube-api-access-kvctp\") pod \"heat-db-sync-bjtkx\" (UID: \"19528149-09a1-44a5-b419-bbe91789d493\") " pod="openstack/heat-db-sync-bjtkx" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.516589 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0a51881-d18e-40dd-8dfb-a243d798133a-config\") pod \"neutron-db-sync-r5vvf\" (UID: \"b0a51881-d18e-40dd-8dfb-a243d798133a\") " pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.516682 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h9bg\" (UniqueName: \"kubernetes.io/projected/b0a51881-d18e-40dd-8dfb-a243d798133a-kube-api-access-8h9bg\") pod \"neutron-db-sync-r5vvf\" (UID: \"b0a51881-d18e-40dd-8dfb-a243d798133a\") " pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.518540 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.524004 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.524109 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.524195 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-76mdj" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.524381 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.526453 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19528149-09a1-44a5-b419-bbe91789d493-config-data\") pod \"heat-db-sync-bjtkx\" (UID: \"19528149-09a1-44a5-b419-bbe91789d493\") " pod="openstack/heat-db-sync-bjtkx" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.543870 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19528149-09a1-44a5-b419-bbe91789d493-combined-ca-bundle\") pod \"heat-db-sync-bjtkx\" (UID: \"19528149-09a1-44a5-b419-bbe91789d493\") " pod="openstack/heat-db-sync-bjtkx" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.570575 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.572359 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.573633 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvctp\" (UniqueName: \"kubernetes.io/projected/19528149-09a1-44a5-b419-bbe91789d493-kube-api-access-kvctp\") pod \"heat-db-sync-bjtkx\" (UID: \"19528149-09a1-44a5-b419-bbe91789d493\") " pod="openstack/heat-db-sync-bjtkx" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.589413 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bjtkx" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.592977 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.604569 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.609147 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qw29c"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.610129 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.620374 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qw29c"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.621055 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0a51881-d18e-40dd-8dfb-a243d798133a-config\") pod \"neutron-db-sync-r5vvf\" (UID: \"b0a51881-d18e-40dd-8dfb-a243d798133a\") " pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.621090 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36dba152-b43d-47c4-94bb-874f93b0884f-logs\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.621162 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tmzg\" (UniqueName: \"kubernetes.io/projected/36dba152-b43d-47c4-94bb-874f93b0884f-kube-api-access-9tmzg\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.621182 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36dba152-b43d-47c4-94bb-874f93b0884f-scripts\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.621200 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h9bg\" (UniqueName: \"kubernetes.io/projected/b0a51881-d18e-40dd-8dfb-a243d798133a-kube-api-access-8h9bg\") pod \"neutron-db-sync-r5vvf\" (UID: \"b0a51881-d18e-40dd-8dfb-a243d798133a\") " pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.621224 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36dba152-b43d-47c4-94bb-874f93b0884f-horizon-secret-key\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.621284 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36dba152-b43d-47c4-94bb-874f93b0884f-config-data\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.621341 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a51881-d18e-40dd-8dfb-a243d798133a-combined-ca-bundle\") pod \"neutron-db-sync-r5vvf\" (UID: \"b0a51881-d18e-40dd-8dfb-a243d798133a\") " pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.630587 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b0a51881-d18e-40dd-8dfb-a243d798133a-combined-ca-bundle\") pod \"neutron-db-sync-r5vvf\" (UID: \"b0a51881-d18e-40dd-8dfb-a243d798133a\") " pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.636181 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zk99g" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.636568 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.636731 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.639739 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-r5vvf"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.652938 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.658577 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0a51881-d18e-40dd-8dfb-a243d798133a-config\") pod \"neutron-db-sync-r5vvf\" (UID: \"b0a51881-d18e-40dd-8dfb-a243d798133a\") " pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.669622 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cbdcb8bcc-96jf5"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.671210 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h9bg\" (UniqueName: \"kubernetes.io/projected/b0a51881-d18e-40dd-8dfb-a243d798133a-kube-api-access-8h9bg\") pod \"neutron-db-sync-r5vvf\" (UID: \"b0a51881-d18e-40dd-8dfb-a243d798133a\") " pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726140 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-config-data\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726182 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-combined-ca-bundle\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726207 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726227 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93bf46a8-2942-4b36-9853-88ff5c6e756b-log-httpd\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726280 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-config-data\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726301 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-db-sync-config-data\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726326 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrfc7\" (UniqueName: \"kubernetes.io/projected/93bf46a8-2942-4b36-9853-88ff5c6e756b-kube-api-access-vrfc7\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726358 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93bf46a8-2942-4b36-9853-88ff5c6e756b-run-httpd\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726403 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-scripts\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726425 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36dba152-b43d-47c4-94bb-874f93b0884f-logs\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726450 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36dba152-b43d-47c4-94bb-874f93b0884f-scripts\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726473 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tmzg\" (UniqueName: \"kubernetes.io/projected/36dba152-b43d-47c4-94bb-874f93b0884f-kube-api-access-9tmzg\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726496 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngzhn\" (UniqueName: \"kubernetes.io/projected/bc6102bf-7483-4063-af9d-841e78398b0c-kube-api-access-ngzhn\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726531 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/36dba152-b43d-47c4-94bb-874f93b0884f-horizon-secret-key\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726557 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc6102bf-7483-4063-af9d-841e78398b0c-etc-machine-id\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726583 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726600 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-scripts\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.726642 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36dba152-b43d-47c4-94bb-874f93b0884f-config-data\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.727996 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36dba152-b43d-47c4-94bb-874f93b0884f-config-data\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.729045 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36dba152-b43d-47c4-94bb-874f93b0884f-logs\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.729817 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36dba152-b43d-47c4-94bb-874f93b0884f-scripts\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.745029 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36dba152-b43d-47c4-94bb-874f93b0884f-horizon-secret-key\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.789132 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-8dksv"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.804947 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-8dksv" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.810118 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tmzg\" (UniqueName: \"kubernetes.io/projected/36dba152-b43d-47c4-94bb-874f93b0884f-kube-api-access-9tmzg\") pod \"horizon-cbdcb8bcc-96jf5\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.832745 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-bs8sm" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.837226 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-config-data\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.859931 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-combined-ca-bundle\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.860029 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.860124 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93bf46a8-2942-4b36-9853-88ff5c6e756b-log-httpd\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.860212 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-config-data\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.860293 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-db-sync-config-data\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.866820 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrfc7\" (UniqueName: \"kubernetes.io/projected/93bf46a8-2942-4b36-9853-88ff5c6e756b-kube-api-access-vrfc7\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.867064 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93bf46a8-2942-4b36-9853-88ff5c6e756b-run-httpd\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc 
kubenswrapper[4806]: I0126 08:10:44.867215 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-scripts\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.867336 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngzhn\" (UniqueName: \"kubernetes.io/projected/bc6102bf-7483-4063-af9d-841e78398b0c-kube-api-access-ngzhn\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.867433 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc6102bf-7483-4063-af9d-841e78398b0c-etc-machine-id\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.867539 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.867614 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-scripts\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.833357 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.863436 4806 generic.go:334] "Generic (PLEG): container finished" podID="c6b83d6c-c555-44af-a218-2c2f81f3bb0c" containerID="d197da3e9aee1ab798a86b1e5dd13236ba62e7824f83f3a643665df3be0383c8" exitCode=0 Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.870838 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93bf46a8-2942-4b36-9853-88ff5c6e756b-run-httpd\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.850337 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-config-data\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.864695 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93bf46a8-2942-4b36-9853-88ff5c6e756b-log-httpd\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.859460 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hnszz"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.872639 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" event={"ID":"c6b83d6c-c555-44af-a218-2c2f81f3bb0c","Type":"ContainerDied","Data":"d197da3e9aee1ab798a86b1e5dd13236ba62e7824f83f3a643665df3be0383c8"} Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.872724 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.873839 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.875764 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc6102bf-7483-4063-af9d-841e78398b0c-etc-machine-id\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.878722 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-config-data\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.882467 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.882830 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mwbpz" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.883046 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.893464 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-scripts\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.905060 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.932238 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-combined-ca-bundle\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.932256 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.932438 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-scripts\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.933313 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-db-sync-config-data\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.933658 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrfc7\" (UniqueName: \"kubernetes.io/projected/93bf46a8-2942-4b36-9853-88ff5c6e756b-kube-api-access-vrfc7\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.934272 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-8dksv"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.947627 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngzhn\" (UniqueName: \"kubernetes.io/projected/bc6102bf-7483-4063-af9d-841e78398b0c-kube-api-access-ngzhn\") pod \"cinder-db-sync-qw29c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.949958 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hnszz"] Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.961140 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qw29c" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.966435 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " pod="openstack/ceilometer-0" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.970234 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-config-data\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.970297 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rthdl\" (UniqueName: \"kubernetes.io/projected/4588263f-b01b-4a54-829f-1cef11d1dbd3-kube-api-access-rthdl\") pod \"barbican-db-sync-8dksv\" (UID: \"4588263f-b01b-4a54-829f-1cef11d1dbd3\") " pod="openstack/barbican-db-sync-8dksv" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.970374 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86ed2345-2edc-46bb-a416-3cfa5c01b38d-logs\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.970430 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4588263f-b01b-4a54-829f-1cef11d1dbd3-combined-ca-bundle\") pod \"barbican-db-sync-8dksv\" (UID: \"4588263f-b01b-4a54-829f-1cef11d1dbd3\") " pod="openstack/barbican-db-sync-8dksv" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.970509 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-combined-ca-bundle\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.970565 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4588263f-b01b-4a54-829f-1cef11d1dbd3-db-sync-config-data\") pod \"barbican-db-sync-8dksv\" (UID: \"4588263f-b01b-4a54-829f-1cef11d1dbd3\") " pod="openstack/barbican-db-sync-8dksv" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.970582 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-scripts\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.970635 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlpp6\" (UniqueName: \"kubernetes.io/projected/86ed2345-2edc-46bb-a416-3cfa5c01b38d-kube-api-access-rlpp6\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " 
pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:44 crc kubenswrapper[4806]: I0126 08:10:44.989135 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fcbc8"] Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.019726 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-795f9f9b67-n94zj"] Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.024411 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.034222 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-795f9f9b67-n94zj"] Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.042493 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-42c6r"] Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.046222 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.076552 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-combined-ca-bundle\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.076604 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4588263f-b01b-4a54-829f-1cef11d1dbd3-db-sync-config-data\") pod \"barbican-db-sync-8dksv\" (UID: \"4588263f-b01b-4a54-829f-1cef11d1dbd3\") " pod="openstack/barbican-db-sync-8dksv" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.076630 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-scripts\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.076664 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlpp6\" (UniqueName: \"kubernetes.io/projected/86ed2345-2edc-46bb-a416-3cfa5c01b38d-kube-api-access-rlpp6\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.076685 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-config-data\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.076702 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rthdl\" (UniqueName: \"kubernetes.io/projected/4588263f-b01b-4a54-829f-1cef11d1dbd3-kube-api-access-rthdl\") pod \"barbican-db-sync-8dksv\" (UID: \"4588263f-b01b-4a54-829f-1cef11d1dbd3\") " pod="openstack/barbican-db-sync-8dksv" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.076746 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/86ed2345-2edc-46bb-a416-3cfa5c01b38d-logs\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.076776 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4588263f-b01b-4a54-829f-1cef11d1dbd3-combined-ca-bundle\") pod \"barbican-db-sync-8dksv\" (UID: \"4588263f-b01b-4a54-829f-1cef11d1dbd3\") " pod="openstack/barbican-db-sync-8dksv" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.078585 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86ed2345-2edc-46bb-a416-3cfa5c01b38d-logs\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.097759 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlpp6\" (UniqueName: \"kubernetes.io/projected/86ed2345-2edc-46bb-a416-3cfa5c01b38d-kube-api-access-rlpp6\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.099801 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-combined-ca-bundle\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.100121 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4588263f-b01b-4a54-829f-1cef11d1dbd3-combined-ca-bundle\") pod \"barbican-db-sync-8dksv\" (UID: \"4588263f-b01b-4a54-829f-1cef11d1dbd3\") " pod="openstack/barbican-db-sync-8dksv" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.100978 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rthdl\" (UniqueName: \"kubernetes.io/projected/4588263f-b01b-4a54-829f-1cef11d1dbd3-kube-api-access-rthdl\") pod \"barbican-db-sync-8dksv\" (UID: \"4588263f-b01b-4a54-829f-1cef11d1dbd3\") " pod="openstack/barbican-db-sync-8dksv" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.101469 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-config-data\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.102075 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4588263f-b01b-4a54-829f-1cef11d1dbd3-db-sync-config-data\") pod \"barbican-db-sync-8dksv\" (UID: \"4588263f-b01b-4a54-829f-1cef11d1dbd3\") " pod="openstack/barbican-db-sync-8dksv" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.102083 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-scripts\") pod \"placement-db-sync-hnszz\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:45 
crc kubenswrapper[4806]: I0126 08:10:45.123315 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-42c6r"] Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.157564 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8dksv" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.179623 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-config\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.179708 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.179761 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a15cfc51-1d28-4476-b9ac-2ef08300220f-scripts\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.179794 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a15cfc51-1d28-4476-b9ac-2ef08300220f-config-data\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.179812 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15cfc51-1d28-4476-b9ac-2ef08300220f-logs\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.179877 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgddk\" (UniqueName: \"kubernetes.io/projected/2762738e-b53a-4f25-ae4e-fa5182994a78-kube-api-access-rgddk\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.179947 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhzvf\" (UniqueName: \"kubernetes.io/projected/a15cfc51-1d28-4476-b9ac-2ef08300220f-kube-api-access-hhzvf\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.179974 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc 
kubenswrapper[4806]: I0126 08:10:45.180045 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-dns-svc\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.180091 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a15cfc51-1d28-4476-b9ac-2ef08300220f-horizon-secret-key\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.180114 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.226183 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.266985 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hnszz" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.281473 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.281572 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a15cfc51-1d28-4476-b9ac-2ef08300220f-scripts\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.281610 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a15cfc51-1d28-4476-b9ac-2ef08300220f-config-data\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.281634 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15cfc51-1d28-4476-b9ac-2ef08300220f-logs\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.281689 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgddk\" (UniqueName: \"kubernetes.io/projected/2762738e-b53a-4f25-ae4e-fa5182994a78-kube-api-access-rgddk\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.281729 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hhzvf\" (UniqueName: \"kubernetes.io/projected/a15cfc51-1d28-4476-b9ac-2ef08300220f-kube-api-access-hhzvf\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.281750 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.281795 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-dns-svc\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.281822 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a15cfc51-1d28-4476-b9ac-2ef08300220f-horizon-secret-key\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.281878 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.281919 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-config\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.283608 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15cfc51-1d28-4476-b9ac-2ef08300220f-logs\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.284319 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-config\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.284576 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.284657 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-dns-svc\") pod 
\"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.284833 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.285236 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.286089 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a15cfc51-1d28-4476-b9ac-2ef08300220f-config-data\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.290559 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a15cfc51-1d28-4476-b9ac-2ef08300220f-scripts\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.299004 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a15cfc51-1d28-4476-b9ac-2ef08300220f-horizon-secret-key\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.312684 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgddk\" (UniqueName: \"kubernetes.io/projected/2762738e-b53a-4f25-ae4e-fa5182994a78-kube-api-access-rgddk\") pod \"dnsmasq-dns-cf78879c9-42c6r\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.328804 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhzvf\" (UniqueName: \"kubernetes.io/projected/a15cfc51-1d28-4476-b9ac-2ef08300220f-kube-api-access-hhzvf\") pod \"horizon-795f9f9b67-n94zj\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.450090 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.541627 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.590480 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hvd56"] Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.605388 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.641990 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fcbc8"] Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.683786 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-bjtkx"] Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.700718 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-ovsdbserver-sb\") pod \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.701570 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq9kk\" (UniqueName: \"kubernetes.io/projected/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-kube-api-access-fq9kk\") pod \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.701794 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-ovsdbserver-nb\") pod \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.701928 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-dns-svc\") pod \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.702033 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-config\") pod \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.702131 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-dns-swift-storage-0\") pod \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\" (UID: \"c6b83d6c-c555-44af-a218-2c2f81f3bb0c\") " Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.723323 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-kube-api-access-fq9kk" (OuterVolumeSpecName: "kube-api-access-fq9kk") pod "c6b83d6c-c555-44af-a218-2c2f81f3bb0c" (UID: "c6b83d6c-c555-44af-a218-2c2f81f3bb0c"). InnerVolumeSpecName "kube-api-access-fq9kk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.805642 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq9kk\" (UniqueName: \"kubernetes.io/projected/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-kube-api-access-fq9kk\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.807305 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.807355 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.891234 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hvd56" event={"ID":"7c574ca2-3991-4eec-80f8-2e389d6a0e4b","Type":"ContainerStarted","Data":"0abc2a138d7496cfd9806d5d67c871d23f503c780ca64bdaf5297de8b0e98f95"} Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.894757 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bjtkx" event={"ID":"19528149-09a1-44a5-b419-bbe91789d493","Type":"ContainerStarted","Data":"fd5e70421906e9710119a2f5550ab75b467b5cac848c81709f2f2e7f6bb2530d"} Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.898023 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c6b83d6c-c555-44af-a218-2c2f81f3bb0c" (UID: "c6b83d6c-c555-44af-a218-2c2f81f3bb0c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.898801 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-fcbc8" event={"ID":"30f57a1f-8a72-4dfa-88da-db7bd1312809","Type":"ContainerStarted","Data":"0333a20ffbab4a3ec2f8f055052dd3a78bb81cd41e56eed14bc139e754b0a544"} Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.907506 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.910207 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" event={"ID":"c6b83d6c-c555-44af-a218-2c2f81f3bb0c","Type":"ContainerDied","Data":"376005f75c92d5e0bc54afb0a9fc8ef2837fcf31d0f7cb3915820de363053bef"} Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.910292 4806 scope.go:117] "RemoveContainer" containerID="d197da3e9aee1ab798a86b1e5dd13236ba62e7824f83f3a643665df3be0383c8" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.910414 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.927092 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c6b83d6c-c555-44af-a218-2c2f81f3bb0c" (UID: "c6b83d6c-c555-44af-a218-2c2f81f3bb0c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.927371 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c6b83d6c-c555-44af-a218-2c2f81f3bb0c" (UID: "c6b83d6c-c555-44af-a218-2c2f81f3bb0c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.934045 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c6b83d6c-c555-44af-a218-2c2f81f3bb0c" (UID: "c6b83d6c-c555-44af-a218-2c2f81f3bb0c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.939238 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-config" (OuterVolumeSpecName: "config") pod "c6b83d6c-c555-44af-a218-2c2f81f3bb0c" (UID: "c6b83d6c-c555-44af-a218-2c2f81f3bb0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:45 crc kubenswrapper[4806]: I0126 08:10:45.975713 4806 scope.go:117] "RemoveContainer" containerID="84349ee66015b9ecf4c1fdc86fdb0e6a013b6987ec74b1a25a2245c77c8c5fee" Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.011480 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.011706 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.012015 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.012093 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6b83d6c-c555-44af-a218-2c2f81f3bb0c-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.290179 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-kx6s2"] Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.304230 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-kx6s2"] Jan 26 08:10:46 crc kubenswrapper[4806]: W0126 08:10:46.374214 4806 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36dba152_b43d_47c4_94bb_874f93b0884f.slice/crio-c362f398f6f7a61971db6576b414a84f06551e1ea006f0559447851533f9a772 WatchSource:0}: Error finding container c362f398f6f7a61971db6576b414a84f06551e1ea006f0559447851533f9a772: Status 404 returned error can't find the container with id c362f398f6f7a61971db6576b414a84f06551e1ea006f0559447851533f9a772 Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.399966 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-8dksv"] Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.408769 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cbdcb8bcc-96jf5"] Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.419425 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-r5vvf"] Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.425664 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qw29c"] Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.551480 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hnszz"] Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.561321 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-42c6r"] Jan 26 08:10:46 crc kubenswrapper[4806]: W0126 08:10:46.586851 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86ed2345_2edc_46bb_a416_3cfa5c01b38d.slice/crio-89e2ffb816380037443529f69c704fb204139c80493e6d6008daccbaf329aa32 WatchSource:0}: Error finding container 89e2ffb816380037443529f69c704fb204139c80493e6d6008daccbaf329aa32: Status 404 returned error can't find the container with id 89e2ffb816380037443529f69c704fb204139c80493e6d6008daccbaf329aa32 Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.604578 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.654185 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-795f9f9b67-n94zj"] Jan 26 08:10:46 crc kubenswrapper[4806]: W0126 08:10:46.690011 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda15cfc51_1d28_4476_b9ac_2ef08300220f.slice/crio-8aab465a27a6f72104325bfd7323b25831c57fabd1466520f25f05843140d12f WatchSource:0}: Error finding container 8aab465a27a6f72104325bfd7323b25831c57fabd1466520f25f05843140d12f: Status 404 returned error can't find the container with id 8aab465a27a6f72104325bfd7323b25831c57fabd1466520f25f05843140d12f Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.927919 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbdcb8bcc-96jf5" event={"ID":"36dba152-b43d-47c4-94bb-874f93b0884f","Type":"ContainerStarted","Data":"c362f398f6f7a61971db6576b414a84f06551e1ea006f0559447851533f9a772"} Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.944188 4806 generic.go:334] "Generic (PLEG): container finished" podID="30f57a1f-8a72-4dfa-88da-db7bd1312809" containerID="e5294748c08c442eeda0d2209728457956dd45748f5d7fcd48f240d38ebe63ec" exitCode=0 Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.944265 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-fcbc8" 
event={"ID":"30f57a1f-8a72-4dfa-88da-db7bd1312809","Type":"ContainerDied","Data":"e5294748c08c442eeda0d2209728457956dd45748f5d7fcd48f240d38ebe63ec"} Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.959401 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8dksv" event={"ID":"4588263f-b01b-4a54-829f-1cef11d1dbd3","Type":"ContainerStarted","Data":"018297ed617cbeeea51a1718d31dc809e8c02f42977f4e15b58124578dcb7d4b"} Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.962897 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" event={"ID":"2762738e-b53a-4f25-ae4e-fa5182994a78","Type":"ContainerStarted","Data":"1c564710fbb3c31334906cf48cfbbc12782e81af23377856a47e6e1105bff444"} Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.962952 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" event={"ID":"2762738e-b53a-4f25-ae4e-fa5182994a78","Type":"ContainerStarted","Data":"a7d938e28470073b1c56ced327081624d82b3bb0ed1a48ee35ba203adeea5a4d"} Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.980006 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-795f9f9b67-n94zj"] Jan 26 08:10:46 crc kubenswrapper[4806]: I0126 08:10:46.995131 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r5vvf" event={"ID":"b0a51881-d18e-40dd-8dfb-a243d798133a","Type":"ContainerStarted","Data":"a5bf5421c602860c92dc7c4b244d6e55a1de2ebe55850dc369e30a8d0d3dcd53"} Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.006404 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-795f9f9b67-n94zj" event={"ID":"a15cfc51-1d28-4476-b9ac-2ef08300220f","Type":"ContainerStarted","Data":"8aab465a27a6f72104325bfd7323b25831c57fabd1466520f25f05843140d12f"} Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.028233 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d6959ff45-2jnxn"] Jan 26 08:10:47 crc kubenswrapper[4806]: E0126 08:10:47.028779 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b83d6c-c555-44af-a218-2c2f81f3bb0c" containerName="dnsmasq-dns" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.028889 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b83d6c-c555-44af-a218-2c2f81f3bb0c" containerName="dnsmasq-dns" Jan 26 08:10:47 crc kubenswrapper[4806]: E0126 08:10:47.028922 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b83d6c-c555-44af-a218-2c2f81f3bb0c" containerName="init" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.028929 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b83d6c-c555-44af-a218-2c2f81f3bb0c" containerName="init" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.029158 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b83d6c-c555-44af-a218-2c2f81f3bb0c" containerName="dnsmasq-dns" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.030366 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93bf46a8-2942-4b36-9853-88ff5c6e756b","Type":"ContainerStarted","Data":"567c7947d52421557b86a983e71f80bdae2f486af0016a50a666767ebbd09ef3"} Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.030501 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.031009 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qw29c" event={"ID":"bc6102bf-7483-4063-af9d-841e78398b0c","Type":"ContainerStarted","Data":"c89fec84698ff1592ba8f352fd5b70972d628d8262b81d473618c696f80f66fc"} Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.067033 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6b83d6c-c555-44af-a218-2c2f81f3bb0c" path="/var/lib/kubelet/pods/c6b83d6c-c555-44af-a218-2c2f81f3bb0c/volumes" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.067633 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hvd56" event={"ID":"7c574ca2-3991-4eec-80f8-2e389d6a0e4b","Type":"ContainerStarted","Data":"eec2b19530f5fed972c2de385b241db04e751ef987f525135d79fd350fcc0a31"} Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.067665 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hnszz" event={"ID":"86ed2345-2edc-46bb-a416-3cfa5c01b38d","Type":"ContainerStarted","Data":"89e2ffb816380037443529f69c704fb204139c80493e6d6008daccbaf329aa32"} Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.087598 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d6959ff45-2jnxn"] Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.126029 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.155206 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hvd56" podStartSLOduration=3.155190158 podStartE2EDuration="3.155190158s" podCreationTimestamp="2026-01-26 08:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:10:47.097097756 +0000 UTC m=+1026.361505812" watchObservedRunningTime="2026-01-26 08:10:47.155190158 +0000 UTC m=+1026.419598214" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.161251 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k552v\" (UniqueName: \"kubernetes.io/projected/1ded787e-1546-468b-a693-640272090020-kube-api-access-k552v\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.161293 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ded787e-1546-468b-a693-640272090020-scripts\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.161345 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ded787e-1546-468b-a693-640272090020-logs\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.161390 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/1ded787e-1546-468b-a693-640272090020-horizon-secret-key\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.161448 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ded787e-1546-468b-a693-640272090020-config-data\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.267693 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ded787e-1546-468b-a693-640272090020-logs\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.267920 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ded787e-1546-468b-a693-640272090020-horizon-secret-key\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.267980 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ded787e-1546-468b-a693-640272090020-config-data\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.268075 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k552v\" (UniqueName: \"kubernetes.io/projected/1ded787e-1546-468b-a693-640272090020-kube-api-access-k552v\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.268108 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ded787e-1546-468b-a693-640272090020-scripts\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.268933 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ded787e-1546-468b-a693-640272090020-logs\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.268990 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ded787e-1546-468b-a693-640272090020-scripts\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.272294 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ded787e-1546-468b-a693-640272090020-config-data\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 
08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.286969 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k552v\" (UniqueName: \"kubernetes.io/projected/1ded787e-1546-468b-a693-640272090020-kube-api-access-k552v\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.288190 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ded787e-1546-468b-a693-640272090020-horizon-secret-key\") pod \"horizon-d6959ff45-2jnxn\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.372374 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.536778 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.676092 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8kqc\" (UniqueName: \"kubernetes.io/projected/30f57a1f-8a72-4dfa-88da-db7bd1312809-kube-api-access-p8kqc\") pod \"30f57a1f-8a72-4dfa-88da-db7bd1312809\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.676475 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-dns-swift-storage-0\") pod \"30f57a1f-8a72-4dfa-88da-db7bd1312809\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.676543 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-dns-svc\") pod \"30f57a1f-8a72-4dfa-88da-db7bd1312809\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.676592 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-ovsdbserver-nb\") pod \"30f57a1f-8a72-4dfa-88da-db7bd1312809\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.676709 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-config\") pod \"30f57a1f-8a72-4dfa-88da-db7bd1312809\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.676727 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-ovsdbserver-sb\") pod \"30f57a1f-8a72-4dfa-88da-db7bd1312809\" (UID: \"30f57a1f-8a72-4dfa-88da-db7bd1312809\") " Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.683329 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f57a1f-8a72-4dfa-88da-db7bd1312809-kube-api-access-p8kqc" (OuterVolumeSpecName: "kube-api-access-p8kqc") pod "30f57a1f-8a72-4dfa-88da-db7bd1312809" 
(UID: "30f57a1f-8a72-4dfa-88da-db7bd1312809"). InnerVolumeSpecName "kube-api-access-p8kqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.724847 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "30f57a1f-8a72-4dfa-88da-db7bd1312809" (UID: "30f57a1f-8a72-4dfa-88da-db7bd1312809"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.743714 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30f57a1f-8a72-4dfa-88da-db7bd1312809" (UID: "30f57a1f-8a72-4dfa-88da-db7bd1312809"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.744044 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "30f57a1f-8a72-4dfa-88da-db7bd1312809" (UID: "30f57a1f-8a72-4dfa-88da-db7bd1312809"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.746986 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-config" (OuterVolumeSpecName: "config") pod "30f57a1f-8a72-4dfa-88da-db7bd1312809" (UID: "30f57a1f-8a72-4dfa-88da-db7bd1312809"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.750040 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "30f57a1f-8a72-4dfa-88da-db7bd1312809" (UID: "30f57a1f-8a72-4dfa-88da-db7bd1312809"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.778269 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.778305 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.778318 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8kqc\" (UniqueName: \"kubernetes.io/projected/30f57a1f-8a72-4dfa-88da-db7bd1312809-kube-api-access-p8kqc\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.778328 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.778337 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.778345 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30f57a1f-8a72-4dfa-88da-db7bd1312809-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:47 crc kubenswrapper[4806]: I0126 08:10:47.969984 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d6959ff45-2jnxn"] Jan 26 08:10:48 crc kubenswrapper[4806]: W0126 08:10:48.033631 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ded787e_1546_468b_a693_640272090020.slice/crio-6ccddb363d27b58595bab4e448598ce827cc961567d9efd7af8dafd8c572dadd WatchSource:0}: Error finding container 6ccddb363d27b58595bab4e448598ce827cc961567d9efd7af8dafd8c572dadd: Status 404 returned error can't find the container with id 6ccddb363d27b58595bab4e448598ce827cc961567d9efd7af8dafd8c572dadd Jan 26 08:10:48 crc kubenswrapper[4806]: I0126 08:10:48.072565 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d6959ff45-2jnxn" event={"ID":"1ded787e-1546-468b-a693-640272090020","Type":"ContainerStarted","Data":"6ccddb363d27b58595bab4e448598ce827cc961567d9efd7af8dafd8c572dadd"} Jan 26 08:10:48 crc kubenswrapper[4806]: I0126 08:10:48.095759 4806 generic.go:334] "Generic (PLEG): container finished" podID="2762738e-b53a-4f25-ae4e-fa5182994a78" containerID="1c564710fbb3c31334906cf48cfbbc12782e81af23377856a47e6e1105bff444" exitCode=0 Jan 26 08:10:48 crc kubenswrapper[4806]: I0126 08:10:48.095830 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" event={"ID":"2762738e-b53a-4f25-ae4e-fa5182994a78","Type":"ContainerDied","Data":"1c564710fbb3c31334906cf48cfbbc12782e81af23377856a47e6e1105bff444"} Jan 26 08:10:48 crc kubenswrapper[4806]: I0126 08:10:48.107118 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r5vvf" event={"ID":"b0a51881-d18e-40dd-8dfb-a243d798133a","Type":"ContainerStarted","Data":"8a25122d6c9fad04d754046184c87ea909a7a3437fdcfded27819a663fb1f063"} Jan 26 
08:10:48 crc kubenswrapper[4806]: I0126 08:10:48.126628 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-fcbc8" Jan 26 08:10:48 crc kubenswrapper[4806]: I0126 08:10:48.126715 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-fcbc8" event={"ID":"30f57a1f-8a72-4dfa-88da-db7bd1312809","Type":"ContainerDied","Data":"0333a20ffbab4a3ec2f8f055052dd3a78bb81cd41e56eed14bc139e754b0a544"} Jan 26 08:10:48 crc kubenswrapper[4806]: I0126 08:10:48.126747 4806 scope.go:117] "RemoveContainer" containerID="e5294748c08c442eeda0d2209728457956dd45748f5d7fcd48f240d38ebe63ec" Jan 26 08:10:48 crc kubenswrapper[4806]: I0126 08:10:48.144984 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-r5vvf" podStartSLOduration=4.144965642 podStartE2EDuration="4.144965642s" podCreationTimestamp="2026-01-26 08:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:10:48.14310016 +0000 UTC m=+1027.407508216" watchObservedRunningTime="2026-01-26 08:10:48.144965642 +0000 UTC m=+1027.409373698" Jan 26 08:10:48 crc kubenswrapper[4806]: I0126 08:10:48.207818 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fcbc8"] Jan 26 08:10:48 crc kubenswrapper[4806]: I0126 08:10:48.215006 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-fcbc8"] Jan 26 08:10:49 crc kubenswrapper[4806]: I0126 08:10:49.062716 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f57a1f-8a72-4dfa-88da-db7bd1312809" path="/var/lib/kubelet/pods/30f57a1f-8a72-4dfa-88da-db7bd1312809/volumes" Jan 26 08:10:49 crc kubenswrapper[4806]: I0126 08:10:49.139292 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" event={"ID":"2762738e-b53a-4f25-ae4e-fa5182994a78","Type":"ContainerStarted","Data":"9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017"} Jan 26 08:10:49 crc kubenswrapper[4806]: I0126 08:10:49.140263 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:49 crc kubenswrapper[4806]: I0126 08:10:49.169626 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" podStartSLOduration=5.169608447 podStartE2EDuration="5.169608447s" podCreationTimestamp="2026-01-26 08:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:10:49.168971619 +0000 UTC m=+1028.433379675" watchObservedRunningTime="2026-01-26 08:10:49.169608447 +0000 UTC m=+1028.434016503" Jan 26 08:10:50 crc kubenswrapper[4806]: I0126 08:10:50.164700 4806 generic.go:334] "Generic (PLEG): container finished" podID="f602c552-c375-4d9b-96fc-633ad5811f7d" containerID="a4fedad710b52b7d491be18c79d774060b6fd791076c1359f72a6fc755541add" exitCode=0 Jan 26 08:10:50 crc kubenswrapper[4806]: I0126 08:10:50.165197 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hfpkt" event={"ID":"f602c552-c375-4d9b-96fc-633ad5811f7d","Type":"ContainerDied","Data":"a4fedad710b52b7d491be18c79d774060b6fd791076c1359f72a6fc755541add"} Jan 26 08:10:50 crc kubenswrapper[4806]: I0126 08:10:50.359755 4806 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-5c79d794d7-kx6s2" podUID="c6b83d6c-c555-44af-a218-2c2f81f3bb0c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.136:5353: i/o timeout" Jan 26 08:10:52 crc kubenswrapper[4806]: I0126 08:10:52.188987 4806 generic.go:334] "Generic (PLEG): container finished" podID="7c574ca2-3991-4eec-80f8-2e389d6a0e4b" containerID="eec2b19530f5fed972c2de385b241db04e751ef987f525135d79fd350fcc0a31" exitCode=0 Jan 26 08:10:52 crc kubenswrapper[4806]: I0126 08:10:52.189078 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hvd56" event={"ID":"7c574ca2-3991-4eec-80f8-2e389d6a0e4b","Type":"ContainerDied","Data":"eec2b19530f5fed972c2de385b241db04e751ef987f525135d79fd350fcc0a31"} Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.339487 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cbdcb8bcc-96jf5"] Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.381830 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d485d788d-5q4tb"] Jan 26 08:10:53 crc kubenswrapper[4806]: E0126 08:10:53.382205 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f57a1f-8a72-4dfa-88da-db7bd1312809" containerName="init" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.382219 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f57a1f-8a72-4dfa-88da-db7bd1312809" containerName="init" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.382412 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f57a1f-8a72-4dfa-88da-db7bd1312809" containerName="init" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.383267 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.389921 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.431442 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d485d788d-5q4tb"] Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.446270 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-horizon-secret-key\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.446353 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b4ee8d-6333-4683-94c4-b79229c76537-logs\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.446373 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-horizon-tls-certs\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.446417 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8k79\" (UniqueName: 
\"kubernetes.io/projected/d7b4ee8d-6333-4683-94c4-b79229c76537-kube-api-access-h8k79\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.446442 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7b4ee8d-6333-4683-94c4-b79229c76537-scripts\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.446509 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b4ee8d-6333-4683-94c4-b79229c76537-config-data\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.446569 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-combined-ca-bundle\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.499399 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d6959ff45-2jnxn"] Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.538552 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6b8f96b47b-sbsnb"] Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.540202 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.548602 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-combined-ca-bundle\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.548692 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-horizon-secret-key\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.548731 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b4ee8d-6333-4683-94c4-b79229c76537-logs\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.548749 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-horizon-tls-certs\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.548783 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8k79\" (UniqueName: \"kubernetes.io/projected/d7b4ee8d-6333-4683-94c4-b79229c76537-kube-api-access-h8k79\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.548806 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7b4ee8d-6333-4683-94c4-b79229c76537-scripts\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.548853 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b4ee8d-6333-4683-94c4-b79229c76537-config-data\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.549796 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b4ee8d-6333-4683-94c4-b79229c76537-logs\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.551093 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b4ee8d-6333-4683-94c4-b79229c76537-config-data\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.553133 4806 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7b4ee8d-6333-4683-94c4-b79229c76537-scripts\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.561604 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b8f96b47b-sbsnb"] Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.569477 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-horizon-tls-certs\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.582847 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-combined-ca-bundle\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.587652 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-horizon-secret-key\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.600237 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8k79\" (UniqueName: \"kubernetes.io/projected/d7b4ee8d-6333-4683-94c4-b79229c76537-kube-api-access-h8k79\") pod \"horizon-7d485d788d-5q4tb\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.651947 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4ed3e96-22ec-410e-8f50-afd310343aa8-horizon-tls-certs\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.652037 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4ed3e96-22ec-410e-8f50-afd310343aa8-horizon-secret-key\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.652220 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmzrs\" (UniqueName: \"kubernetes.io/projected/d4ed3e96-22ec-410e-8f50-afd310343aa8-kube-api-access-mmzrs\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.652297 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4ed3e96-22ec-410e-8f50-afd310343aa8-scripts\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc 
kubenswrapper[4806]: I0126 08:10:53.652332 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4ed3e96-22ec-410e-8f50-afd310343aa8-logs\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.652439 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4ed3e96-22ec-410e-8f50-afd310343aa8-config-data\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.652533 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4ed3e96-22ec-410e-8f50-afd310343aa8-combined-ca-bundle\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.727054 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.754461 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmzrs\" (UniqueName: \"kubernetes.io/projected/d4ed3e96-22ec-410e-8f50-afd310343aa8-kube-api-access-mmzrs\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.754626 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4ed3e96-22ec-410e-8f50-afd310343aa8-scripts\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.754656 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4ed3e96-22ec-410e-8f50-afd310343aa8-logs\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.754721 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4ed3e96-22ec-410e-8f50-afd310343aa8-config-data\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.754763 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4ed3e96-22ec-410e-8f50-afd310343aa8-combined-ca-bundle\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.754806 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4ed3e96-22ec-410e-8f50-afd310343aa8-horizon-tls-certs\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: 
\"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.754868 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4ed3e96-22ec-410e-8f50-afd310343aa8-horizon-secret-key\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.755147 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4ed3e96-22ec-410e-8f50-afd310343aa8-logs\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.762511 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d4ed3e96-22ec-410e-8f50-afd310343aa8-scripts\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.762824 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d4ed3e96-22ec-410e-8f50-afd310343aa8-config-data\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.766125 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4ed3e96-22ec-410e-8f50-afd310343aa8-combined-ca-bundle\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.766455 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4ed3e96-22ec-410e-8f50-afd310343aa8-horizon-tls-certs\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.767874 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d4ed3e96-22ec-410e-8f50-afd310343aa8-horizon-secret-key\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.772942 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmzrs\" (UniqueName: \"kubernetes.io/projected/d4ed3e96-22ec-410e-8f50-afd310343aa8-kube-api-access-mmzrs\") pod \"horizon-6b8f96b47b-sbsnb\" (UID: \"d4ed3e96-22ec-410e-8f50-afd310343aa8\") " pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:53 crc kubenswrapper[4806]: I0126 08:10:53.993123 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.817947 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.874290 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-config-data\") pod \"f602c552-c375-4d9b-96fc-633ad5811f7d\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.874366 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwqv6\" (UniqueName: \"kubernetes.io/projected/f602c552-c375-4d9b-96fc-633ad5811f7d-kube-api-access-vwqv6\") pod \"f602c552-c375-4d9b-96fc-633ad5811f7d\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.874432 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-db-sync-config-data\") pod \"f602c552-c375-4d9b-96fc-633ad5811f7d\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.874900 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-combined-ca-bundle\") pod \"f602c552-c375-4d9b-96fc-633ad5811f7d\" (UID: \"f602c552-c375-4d9b-96fc-633ad5811f7d\") " Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.881683 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f602c552-c375-4d9b-96fc-633ad5811f7d-kube-api-access-vwqv6" (OuterVolumeSpecName: "kube-api-access-vwqv6") pod "f602c552-c375-4d9b-96fc-633ad5811f7d" (UID: "f602c552-c375-4d9b-96fc-633ad5811f7d"). InnerVolumeSpecName "kube-api-access-vwqv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.892658 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f602c552-c375-4d9b-96fc-633ad5811f7d" (UID: "f602c552-c375-4d9b-96fc-633ad5811f7d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.907741 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f602c552-c375-4d9b-96fc-633ad5811f7d" (UID: "f602c552-c375-4d9b-96fc-633ad5811f7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.933540 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-config-data" (OuterVolumeSpecName: "config-data") pod "f602c552-c375-4d9b-96fc-633ad5811f7d" (UID: "f602c552-c375-4d9b-96fc-633ad5811f7d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.986441 4806 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.986493 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.986507 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f602c552-c375-4d9b-96fc-633ad5811f7d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:54 crc kubenswrapper[4806]: I0126 08:10:54.986530 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwqv6\" (UniqueName: \"kubernetes.io/projected/f602c552-c375-4d9b-96fc-633ad5811f7d-kube-api-access-vwqv6\") on node \"crc\" DevicePath \"\"" Jan 26 08:10:55 crc kubenswrapper[4806]: I0126 08:10:55.220447 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-hfpkt" event={"ID":"f602c552-c375-4d9b-96fc-633ad5811f7d","Type":"ContainerDied","Data":"97217450f8475c26f843acd8407d6496d356cad593eab5b8bb4bace2a3e4fbfe"} Jan 26 08:10:55 crc kubenswrapper[4806]: I0126 08:10:55.220496 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97217450f8475c26f843acd8407d6496d356cad593eab5b8bb4bace2a3e4fbfe" Jan 26 08:10:55 crc kubenswrapper[4806]: I0126 08:10:55.220887 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-hfpkt" Jan 26 08:10:55 crc kubenswrapper[4806]: I0126 08:10:55.542723 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:10:55 crc kubenswrapper[4806]: I0126 08:10:55.614817 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-xcfqg"] Jan 26 08:10:55 crc kubenswrapper[4806]: I0126 08:10:55.615094 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" podUID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerName="dnsmasq-dns" containerID="cri-o://f5b7aba37df1ab70703a3ef3dc28df0cf9e18d2c32129934f84e93f139ee5b72" gracePeriod=10 Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.265984 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-q5h8d"] Jan 26 08:10:56 crc kubenswrapper[4806]: E0126 08:10:56.266624 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f602c552-c375-4d9b-96fc-633ad5811f7d" containerName="glance-db-sync" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.266638 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f602c552-c375-4d9b-96fc-633ad5811f7d" containerName="glance-db-sync" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.268656 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f602c552-c375-4d9b-96fc-633ad5811f7d" containerName="glance-db-sync" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.270362 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.285996 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-q5h8d"] Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.318421 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.318534 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.318555 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.318576 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-config\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.318648 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.318671 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5x2m\" (UniqueName: \"kubernetes.io/projected/8a935821-0f8f-4e4f-9ae8-00fa265f8269-kube-api-access-g5x2m\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.421207 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.421254 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5x2m\" (UniqueName: \"kubernetes.io/projected/8a935821-0f8f-4e4f-9ae8-00fa265f8269-kube-api-access-g5x2m\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.421273 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.421343 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.421364 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.421388 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-config\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.422534 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.422904 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-config\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.423035 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.423107 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.423278 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.449457 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5x2m\" (UniqueName: 
\"kubernetes.io/projected/8a935821-0f8f-4e4f-9ae8-00fa265f8269-kube-api-access-g5x2m\") pod \"dnsmasq-dns-56df8fb6b7-q5h8d\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.608690 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:10:56 crc kubenswrapper[4806]: I0126 08:10:56.961140 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" podUID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.164955 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.177394 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.179938 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.180358 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-d2qmz" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.182764 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.182863 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.235072 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.235136 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-scripts\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.235171 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.235197 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck7db\" (UniqueName: \"kubernetes.io/projected/0abdef54-e353-49e8-9dbd-bc47d32d131e-kube-api-access-ck7db\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.235219 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/0abdef54-e353-49e8-9dbd-bc47d32d131e-logs\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.235259 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0abdef54-e353-49e8-9dbd-bc47d32d131e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.235299 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-config-data\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.250129 4806 generic.go:334] "Generic (PLEG): container finished" podID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerID="f5b7aba37df1ab70703a3ef3dc28df0cf9e18d2c32129934f84e93f139ee5b72" exitCode=0 Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.250177 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" event={"ID":"c6ba8a7a-2708-4123-90e8-5b66f4c86448","Type":"ContainerDied","Data":"f5b7aba37df1ab70703a3ef3dc28df0cf9e18d2c32129934f84e93f139ee5b72"} Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.337977 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.338024 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-scripts\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.338062 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.338104 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck7db\" (UniqueName: \"kubernetes.io/projected/0abdef54-e353-49e8-9dbd-bc47d32d131e-kube-api-access-ck7db\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.338133 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0abdef54-e353-49e8-9dbd-bc47d32d131e-logs\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 
08:10:57.338183 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0abdef54-e353-49e8-9dbd-bc47d32d131e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.338220 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-config-data\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.338495 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.340279 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0abdef54-e353-49e8-9dbd-bc47d32d131e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.340648 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0abdef54-e353-49e8-9dbd-bc47d32d131e-logs\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.372673 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-config-data\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.375392 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.377566 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck7db\" (UniqueName: \"kubernetes.io/projected/0abdef54-e353-49e8-9dbd-bc47d32d131e-kube-api-access-ck7db\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.380308 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-scripts\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.441163 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.452673 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.459461 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.461201 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.466070 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.512487 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.543362 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.543407 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8231-c5bf-4074-883b-94949b9e00dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.543425 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szc25\" (UniqueName: \"kubernetes.io/projected/6cdc8231-c5bf-4074-883b-94949b9e00dc-kube-api-access-szc25\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.543488 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8231-c5bf-4074-883b-94949b9e00dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.543557 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.543582 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 
08:10:57.543641 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.645048 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8231-c5bf-4074-883b-94949b9e00dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.645102 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.645129 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.645177 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.645226 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.645248 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8231-c5bf-4074-883b-94949b9e00dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.645267 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szc25\" (UniqueName: \"kubernetes.io/projected/6cdc8231-c5bf-4074-883b-94949b9e00dc-kube-api-access-szc25\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.645856 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.646360 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8231-c5bf-4074-883b-94949b9e00dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.648625 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8231-c5bf-4074-883b-94949b9e00dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.651991 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.653233 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.661877 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szc25\" (UniqueName: \"kubernetes.io/projected/6cdc8231-c5bf-4074-883b-94949b9e00dc-kube-api-access-szc25\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.667938 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.670884 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:10:57 crc kubenswrapper[4806]: I0126 08:10:57.802378 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 08:10:59 crc kubenswrapper[4806]: I0126 08:10:59.715798 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:10:59 crc kubenswrapper[4806]: I0126 08:10:59.786897 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.704537 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.730087 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-fernet-keys\") pod \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.730140 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-credential-keys\") pod \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.730170 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9f9c\" (UniqueName: \"kubernetes.io/projected/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-kube-api-access-c9f9c\") pod \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.730196 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-config-data\") pod \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.730244 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-scripts\") pod \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.730282 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-combined-ca-bundle\") pod \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\" (UID: \"7c574ca2-3991-4eec-80f8-2e389d6a0e4b\") " Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.756129 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-scripts" (OuterVolumeSpecName: "scripts") pod "7c574ca2-3991-4eec-80f8-2e389d6a0e4b" (UID: "7c574ca2-3991-4eec-80f8-2e389d6a0e4b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.762061 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-kube-api-access-c9f9c" (OuterVolumeSpecName: "kube-api-access-c9f9c") pod "7c574ca2-3991-4eec-80f8-2e389d6a0e4b" (UID: "7c574ca2-3991-4eec-80f8-2e389d6a0e4b"). InnerVolumeSpecName "kube-api-access-c9f9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.764514 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7c574ca2-3991-4eec-80f8-2e389d6a0e4b" (UID: "7c574ca2-3991-4eec-80f8-2e389d6a0e4b"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.794861 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7c574ca2-3991-4eec-80f8-2e389d6a0e4b" (UID: "7c574ca2-3991-4eec-80f8-2e389d6a0e4b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.808992 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-config-data" (OuterVolumeSpecName: "config-data") pod "7c574ca2-3991-4eec-80f8-2e389d6a0e4b" (UID: "7c574ca2-3991-4eec-80f8-2e389d6a0e4b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.814665 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c574ca2-3991-4eec-80f8-2e389d6a0e4b" (UID: "7c574ca2-3991-4eec-80f8-2e389d6a0e4b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.832629 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.832654 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.832663 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.832673 4806 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.832680 4806 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:01 crc kubenswrapper[4806]: I0126 08:11:01.832697 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9f9c\" (UniqueName: \"kubernetes.io/projected/7c574ca2-3991-4eec-80f8-2e389d6a0e4b-kube-api-access-c9f9c\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.298623 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hvd56" event={"ID":"7c574ca2-3991-4eec-80f8-2e389d6a0e4b","Type":"ContainerDied","Data":"0abc2a138d7496cfd9806d5d67c871d23f503c780ca64bdaf5297de8b0e98f95"} Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.298658 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0abc2a138d7496cfd9806d5d67c871d23f503c780ca64bdaf5297de8b0e98f95" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.298911 4806 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hvd56" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.800175 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hvd56"] Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.808579 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hvd56"] Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.899837 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-btkzm"] Jan 26 08:11:02 crc kubenswrapper[4806]: E0126 08:11:02.900316 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c574ca2-3991-4eec-80f8-2e389d6a0e4b" containerName="keystone-bootstrap" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.900341 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c574ca2-3991-4eec-80f8-2e389d6a0e4b" containerName="keystone-bootstrap" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.900604 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c574ca2-3991-4eec-80f8-2e389d6a0e4b" containerName="keystone-bootstrap" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.901340 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.905143 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.905621 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.905764 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.905983 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.906871 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wsvrh" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.927460 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-btkzm"] Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.951634 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-combined-ca-bundle\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.951733 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74ls5\" (UniqueName: \"kubernetes.io/projected/6a63a316-0795-4795-8662-5c0b2de2597f-kube-api-access-74ls5\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.951798 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-fernet-keys\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " 
pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.951830 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-credential-keys\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.951861 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-config-data\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:02 crc kubenswrapper[4806]: I0126 08:11:02.951925 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-scripts\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.054113 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c574ca2-3991-4eec-80f8-2e389d6a0e4b" path="/var/lib/kubelet/pods/7c574ca2-3991-4eec-80f8-2e389d6a0e4b/volumes" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.054117 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74ls5\" (UniqueName: \"kubernetes.io/projected/6a63a316-0795-4795-8662-5c0b2de2597f-kube-api-access-74ls5\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.054633 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-fernet-keys\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.054664 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-credential-keys\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.054719 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-config-data\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.054802 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-scripts\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.054902 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-combined-ca-bundle\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.062972 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-credential-keys\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.063785 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-fernet-keys\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.064800 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-scripts\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.064929 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-combined-ca-bundle\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.065616 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-config-data\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.085187 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74ls5\" (UniqueName: \"kubernetes.io/projected/6a63a316-0795-4795-8662-5c0b2de2597f-kube-api-access-74ls5\") pod \"keystone-bootstrap-btkzm\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:03 crc kubenswrapper[4806]: I0126 08:11:03.224069 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:06 crc kubenswrapper[4806]: E0126 08:11:06.683963 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 26 08:11:06 crc kubenswrapper[4806]: E0126 08:11:06.684813 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlpp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-hnszz_openstack(86ed2345-2edc-46bb-a416-3cfa5c01b38d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:11:06 crc kubenswrapper[4806]: E0126 08:11:06.686017 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-hnszz" podUID="86ed2345-2edc-46bb-a416-3cfa5c01b38d" Jan 26 08:11:06 crc kubenswrapper[4806]: E0126 08:11:06.734922 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 26 08:11:06 crc kubenswrapper[4806]: E0126 08:11:06.735107 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c5hf4h694hc5h8ch54hbdh5ffh65dh5b8h5f7h5bh678hfch56ch558h74h5bdh9h648h7fh5c6h5ffh644h69hcdh647h5cfh59h694h676h5b8q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k552v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-d6959ff45-2jnxn_openstack(1ded787e-1546-468b-a693-640272090020): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:11:06 crc kubenswrapper[4806]: E0126 08:11:06.738895 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-d6959ff45-2jnxn" podUID="1ded787e-1546-468b-a693-640272090020" Jan 26 08:11:06 crc kubenswrapper[4806]: I0126 08:11:06.960792 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" podUID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 26 08:11:07 crc kubenswrapper[4806]: E0126 08:11:07.344940 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-hnszz" podUID="86ed2345-2edc-46bb-a416-3cfa5c01b38d" Jan 26 08:11:10 crc kubenswrapper[4806]: E0126 08:11:10.258218 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 26 08:11:10 crc kubenswrapper[4806]: E0126 08:11:10.259368 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58ch5d4h589h646h59dh64ch6bh59chc8hf4h56fh665h7fh7fhd9hdfh6bh589hf5h84h5c7h5cch9ch67fh689h689h95h5c7h68fh5c4h648hd6q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhzvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-795f9f9b67-n94zj_openstack(a15cfc51-1d28-4476-b9ac-2ef08300220f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:11:10 crc kubenswrapper[4806]: E0126 08:11:10.262374 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-795f9f9b67-n94zj" podUID="a15cfc51-1d28-4476-b9ac-2ef08300220f" Jan 26 08:11:10 crc kubenswrapper[4806]: I0126 08:11:10.360941 4806 generic.go:334] "Generic (PLEG): container finished" podID="b0a51881-d18e-40dd-8dfb-a243d798133a" containerID="8a25122d6c9fad04d754046184c87ea909a7a3437fdcfded27819a663fb1f063" exitCode=0 Jan 26 08:11:10 crc kubenswrapper[4806]: I0126 08:11:10.361082 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r5vvf" event={"ID":"b0a51881-d18e-40dd-8dfb-a243d798133a","Type":"ContainerDied","Data":"8a25122d6c9fad04d754046184c87ea909a7a3437fdcfded27819a663fb1f063"} Jan 26 08:11:10 crc kubenswrapper[4806]: E0126 08:11:10.818114 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 26 08:11:10 crc kubenswrapper[4806]: E0126 08:11:10.819040 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rthdl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-8dksv_openstack(4588263f-b01b-4a54-829f-1cef11d1dbd3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:11:10 crc kubenswrapper[4806]: E0126 08:11:10.821050 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-8dksv" podUID="4588263f-b01b-4a54-829f-1cef11d1dbd3" Jan 26 08:11:11 crc kubenswrapper[4806]: E0126 08:11:11.378904 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-8dksv" podUID="4588263f-b01b-4a54-829f-1cef11d1dbd3" Jan 26 08:11:11 crc kubenswrapper[4806]: I0126 08:11:11.961037 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" podUID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 26 08:11:11 crc kubenswrapper[4806]: I0126 08:11:11.961314 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:11:15 crc kubenswrapper[4806]: I0126 08:11:15.806254 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:11:15 crc kubenswrapper[4806]: I0126 08:11:15.806850 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:11:15 crc kubenswrapper[4806]: I0126 08:11:15.806897 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:11:15 crc kubenswrapper[4806]: I0126 08:11:15.807494 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1062ca2b49b34478f04a62458a36769a2e31737989a78160ffd05a185dfcbbaa"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:11:15 crc kubenswrapper[4806]: I0126 08:11:15.807552 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://1062ca2b49b34478f04a62458a36769a2e31737989a78160ffd05a185dfcbbaa" gracePeriod=600 Jan 26 08:11:16 crc kubenswrapper[4806]: I0126 08:11:16.423072 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="1062ca2b49b34478f04a62458a36769a2e31737989a78160ffd05a185dfcbbaa" exitCode=0 Jan 26 08:11:16 crc kubenswrapper[4806]: I0126 08:11:16.423120 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"1062ca2b49b34478f04a62458a36769a2e31737989a78160ffd05a185dfcbbaa"} Jan 26 08:11:16 crc kubenswrapper[4806]: I0126 08:11:16.423178 4806 scope.go:117] "RemoveContainer" containerID="1043e4eeb08886878cec455f2ca6376f949985237b4b0930fb8995d1f97399b2" Jan 26 08:11:16 crc kubenswrapper[4806]: I0126 08:11:16.961805 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" podUID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.443828 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" event={"ID":"c6ba8a7a-2708-4123-90e8-5b66f4c86448","Type":"ContainerDied","Data":"7bdfa702d8bdd0253f1bf21e60c80fe4c36377660e1647ceb3bb25f0526f6052"} Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.444158 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bdfa702d8bdd0253f1bf21e60c80fe4c36377660e1647ceb3bb25f0526f6052" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.501400 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.654467 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-ovsdbserver-sb\") pod \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.654583 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-ovsdbserver-nb\") pod \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.654666 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-config\") pod \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.654755 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-dns-svc\") pod \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.654780 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2nrn\" (UniqueName: \"kubernetes.io/projected/c6ba8a7a-2708-4123-90e8-5b66f4c86448-kube-api-access-p2nrn\") pod \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\" (UID: \"c6ba8a7a-2708-4123-90e8-5b66f4c86448\") " Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.662535 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ba8a7a-2708-4123-90e8-5b66f4c86448-kube-api-access-p2nrn" (OuterVolumeSpecName: "kube-api-access-p2nrn") pod "c6ba8a7a-2708-4123-90e8-5b66f4c86448" (UID: "c6ba8a7a-2708-4123-90e8-5b66f4c86448"). InnerVolumeSpecName "kube-api-access-p2nrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.700696 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c6ba8a7a-2708-4123-90e8-5b66f4c86448" (UID: "c6ba8a7a-2708-4123-90e8-5b66f4c86448"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.702677 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c6ba8a7a-2708-4123-90e8-5b66f4c86448" (UID: "c6ba8a7a-2708-4123-90e8-5b66f4c86448"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.726699 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-config" (OuterVolumeSpecName: "config") pod "c6ba8a7a-2708-4123-90e8-5b66f4c86448" (UID: "c6ba8a7a-2708-4123-90e8-5b66f4c86448"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.730227 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c6ba8a7a-2708-4123-90e8-5b66f4c86448" (UID: "c6ba8a7a-2708-4123-90e8-5b66f4c86448"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.757179 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.757228 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.757250 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.757266 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c6ba8a7a-2708-4123-90e8-5b66f4c86448-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.757284 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2nrn\" (UniqueName: \"kubernetes.io/projected/c6ba8a7a-2708-4123-90e8-5b66f4c86448-kube-api-access-p2nrn\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:18 crc kubenswrapper[4806]: E0126 08:11:18.907283 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 26 08:11:18 crc kubenswrapper[4806]: E0126 08:11:18.907463 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5b7h554h58bh5d9hf8h547h64h595hddhffh556hb5hbfh7dh85h595h66fhfhb9h88hfh664h548h5cchbdh59fh599h64fh9h5d5hf5hdcq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrfc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(93bf46a8-2942-4b36-9853-88ff5c6e756b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:11:18 crc kubenswrapper[4806]: I0126 08:11:18.931195 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.060871 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ded787e-1546-468b-a693-640272090020-logs\") pod \"1ded787e-1546-468b-a693-640272090020\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.061029 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ded787e-1546-468b-a693-640272090020-scripts\") pod \"1ded787e-1546-468b-a693-640272090020\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.061110 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ded787e-1546-468b-a693-640272090020-horizon-secret-key\") pod \"1ded787e-1546-468b-a693-640272090020\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.061150 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ded787e-1546-468b-a693-640272090020-config-data\") pod \"1ded787e-1546-468b-a693-640272090020\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.061242 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k552v\" (UniqueName: \"kubernetes.io/projected/1ded787e-1546-468b-a693-640272090020-kube-api-access-k552v\") pod \"1ded787e-1546-468b-a693-640272090020\" (UID: \"1ded787e-1546-468b-a693-640272090020\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.061580 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ded787e-1546-468b-a693-640272090020-logs" (OuterVolumeSpecName: "logs") pod "1ded787e-1546-468b-a693-640272090020" (UID: "1ded787e-1546-468b-a693-640272090020"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.061700 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ded787e-1546-468b-a693-640272090020-scripts" (OuterVolumeSpecName: "scripts") pod "1ded787e-1546-468b-a693-640272090020" (UID: "1ded787e-1546-468b-a693-640272090020"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.062325 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ded787e-1546-468b-a693-640272090020-config-data" (OuterVolumeSpecName: "config-data") pod "1ded787e-1546-468b-a693-640272090020" (UID: "1ded787e-1546-468b-a693-640272090020"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.063004 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ded787e-1546-468b-a693-640272090020-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.063072 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1ded787e-1546-468b-a693-640272090020-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.063128 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1ded787e-1546-468b-a693-640272090020-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.064535 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ded787e-1546-468b-a693-640272090020-kube-api-access-k552v" (OuterVolumeSpecName: "kube-api-access-k552v") pod "1ded787e-1546-468b-a693-640272090020" (UID: "1ded787e-1546-468b-a693-640272090020"). InnerVolumeSpecName "kube-api-access-k552v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.065373 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ded787e-1546-468b-a693-640272090020-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1ded787e-1546-468b-a693-640272090020" (UID: "1ded787e-1546-468b-a693-640272090020"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.168268 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k552v\" (UniqueName: \"kubernetes.io/projected/1ded787e-1546-468b-a693-640272090020-kube-api-access-k552v\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.168301 4806 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1ded787e-1546-468b-a693-640272090020-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.189901 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:11:19 crc kubenswrapper[4806]: E0126 08:11:19.190665 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 26 08:11:19 crc kubenswrapper[4806]: E0126 08:11:19.190777 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvctp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-bjtkx_openstack(19528149-09a1-44a5-b419-bbe91789d493): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:11:19 crc kubenswrapper[4806]: E0126 08:11:19.192251 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-bjtkx" podUID="19528149-09a1-44a5-b419-bbe91789d493" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.198340 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.369929 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a51881-d18e-40dd-8dfb-a243d798133a-combined-ca-bundle\") pod \"b0a51881-d18e-40dd-8dfb-a243d798133a\" (UID: \"b0a51881-d18e-40dd-8dfb-a243d798133a\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.370093 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15cfc51-1d28-4476-b9ac-2ef08300220f-logs\") pod \"a15cfc51-1d28-4476-b9ac-2ef08300220f\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.370238 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a15cfc51-1d28-4476-b9ac-2ef08300220f-config-data\") pod \"a15cfc51-1d28-4476-b9ac-2ef08300220f\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.370265 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0a51881-d18e-40dd-8dfb-a243d798133a-config\") pod \"b0a51881-d18e-40dd-8dfb-a243d798133a\" (UID: \"b0a51881-d18e-40dd-8dfb-a243d798133a\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.370319 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a15cfc51-1d28-4476-b9ac-2ef08300220f-horizon-secret-key\") pod \"a15cfc51-1d28-4476-b9ac-2ef08300220f\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.370414 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h9bg\" (UniqueName: \"kubernetes.io/projected/b0a51881-d18e-40dd-8dfb-a243d798133a-kube-api-access-8h9bg\") pod \"b0a51881-d18e-40dd-8dfb-a243d798133a\" (UID: \"b0a51881-d18e-40dd-8dfb-a243d798133a\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.370473 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhzvf\" (UniqueName: \"kubernetes.io/projected/a15cfc51-1d28-4476-b9ac-2ef08300220f-kube-api-access-hhzvf\") pod \"a15cfc51-1d28-4476-b9ac-2ef08300220f\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.370554 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a15cfc51-1d28-4476-b9ac-2ef08300220f-scripts\") pod \"a15cfc51-1d28-4476-b9ac-2ef08300220f\" (UID: \"a15cfc51-1d28-4476-b9ac-2ef08300220f\") " Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.371757 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a15cfc51-1d28-4476-b9ac-2ef08300220f-scripts" (OuterVolumeSpecName: "scripts") pod "a15cfc51-1d28-4476-b9ac-2ef08300220f" (UID: "a15cfc51-1d28-4476-b9ac-2ef08300220f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.373114 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a15cfc51-1d28-4476-b9ac-2ef08300220f-config-data" (OuterVolumeSpecName: "config-data") pod "a15cfc51-1d28-4476-b9ac-2ef08300220f" (UID: "a15cfc51-1d28-4476-b9ac-2ef08300220f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.374471 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a15cfc51-1d28-4476-b9ac-2ef08300220f-logs" (OuterVolumeSpecName: "logs") pod "a15cfc51-1d28-4476-b9ac-2ef08300220f" (UID: "a15cfc51-1d28-4476-b9ac-2ef08300220f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.376297 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a51881-d18e-40dd-8dfb-a243d798133a-kube-api-access-8h9bg" (OuterVolumeSpecName: "kube-api-access-8h9bg") pod "b0a51881-d18e-40dd-8dfb-a243d798133a" (UID: "b0a51881-d18e-40dd-8dfb-a243d798133a"). InnerVolumeSpecName "kube-api-access-8h9bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.377774 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a15cfc51-1d28-4476-b9ac-2ef08300220f-kube-api-access-hhzvf" (OuterVolumeSpecName: "kube-api-access-hhzvf") pod "a15cfc51-1d28-4476-b9ac-2ef08300220f" (UID: "a15cfc51-1d28-4476-b9ac-2ef08300220f"). InnerVolumeSpecName "kube-api-access-hhzvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.378005 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a15cfc51-1d28-4476-b9ac-2ef08300220f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a15cfc51-1d28-4476-b9ac-2ef08300220f" (UID: "a15cfc51-1d28-4476-b9ac-2ef08300220f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.393939 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a51881-d18e-40dd-8dfb-a243d798133a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0a51881-d18e-40dd-8dfb-a243d798133a" (UID: "b0a51881-d18e-40dd-8dfb-a243d798133a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.396710 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a51881-d18e-40dd-8dfb-a243d798133a-config" (OuterVolumeSpecName: "config") pod "b0a51881-d18e-40dd-8dfb-a243d798133a" (UID: "b0a51881-d18e-40dd-8dfb-a243d798133a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.472985 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15cfc51-1d28-4476-b9ac-2ef08300220f-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.473016 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a15cfc51-1d28-4476-b9ac-2ef08300220f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.473026 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0a51881-d18e-40dd-8dfb-a243d798133a-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.473035 4806 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a15cfc51-1d28-4476-b9ac-2ef08300220f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.473046 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h9bg\" (UniqueName: \"kubernetes.io/projected/b0a51881-d18e-40dd-8dfb-a243d798133a-kube-api-access-8h9bg\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.473054 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhzvf\" (UniqueName: \"kubernetes.io/projected/a15cfc51-1d28-4476-b9ac-2ef08300220f-kube-api-access-hhzvf\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.473062 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a15cfc51-1d28-4476-b9ac-2ef08300220f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.473070 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a51881-d18e-40dd-8dfb-a243d798133a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.476404 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-r5vvf" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.476516 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-r5vvf" event={"ID":"b0a51881-d18e-40dd-8dfb-a243d798133a","Type":"ContainerDied","Data":"a5bf5421c602860c92dc7c4b244d6e55a1de2ebe55850dc369e30a8d0d3dcd53"} Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.476561 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5bf5421c602860c92dc7c4b244d6e55a1de2ebe55850dc369e30a8d0d3dcd53" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.478372 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-795f9f9b67-n94zj" event={"ID":"a15cfc51-1d28-4476-b9ac-2ef08300220f","Type":"ContainerDied","Data":"8aab465a27a6f72104325bfd7323b25831c57fabd1466520f25f05843140d12f"} Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.478425 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-795f9f9b67-n94zj" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.480104 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d6959ff45-2jnxn" event={"ID":"1ded787e-1546-468b-a693-640272090020","Type":"ContainerDied","Data":"6ccddb363d27b58595bab4e448598ce827cc961567d9efd7af8dafd8c572dadd"} Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.480125 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d6959ff45-2jnxn" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.480192 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" Jan 26 08:11:19 crc kubenswrapper[4806]: E0126 08:11:19.494550 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-bjtkx" podUID="19528149-09a1-44a5-b419-bbe91789d493" Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.537328 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-xcfqg"] Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.570276 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-xcfqg"] Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.608422 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-795f9f9b67-n94zj"] Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.616197 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-795f9f9b67-n94zj"] Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.630269 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d6959ff45-2jnxn"] Jan 26 08:11:19 crc kubenswrapper[4806]: I0126 08:11:19.636789 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-d6959ff45-2jnxn"] Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.421576 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-q5h8d"] Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.483047 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d6hkg"] Jan 26 08:11:20 crc kubenswrapper[4806]: E0126 08:11:20.483831 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerName="init" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.483913 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerName="init" Jan 26 08:11:20 crc kubenswrapper[4806]: E0126 08:11:20.483978 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a51881-d18e-40dd-8dfb-a243d798133a" containerName="neutron-db-sync" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.484065 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a51881-d18e-40dd-8dfb-a243d798133a" containerName="neutron-db-sync" Jan 26 08:11:20 crc kubenswrapper[4806]: E0126 08:11:20.484137 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerName="dnsmasq-dns" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.484193 4806 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerName="dnsmasq-dns" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.484407 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerName="dnsmasq-dns" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.484473 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a51881-d18e-40dd-8dfb-a243d798133a" containerName="neutron-db-sync" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.485374 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.514626 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d6hkg"] Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.515859 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc29v\" (UniqueName: \"kubernetes.io/projected/6f121c86-c5ee-47c3-b80b-f8791a68ee15-kube-api-access-mc29v\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.515906 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-dns-svc\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.515940 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-config\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.516023 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.516053 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.516213 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: E0126 08:11:20.532245 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 26 08:11:20 
crc kubenswrapper[4806]: E0126 08:11:20.532411 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ngzhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qw29c_openstack(bc6102bf-7483-4063-af9d-841e78398b0c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:11:20 crc kubenswrapper[4806]: E0126 08:11:20.534051 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qw29c" podUID="bc6102bf-7483-4063-af9d-841e78398b0c" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.583667 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d5647fbb4-vvzj4"] Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.585304 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.604720 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.604936 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.605485 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.605612 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-gd76f" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.627098 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.627525 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4chc\" (UniqueName: \"kubernetes.io/projected/c547f19a-cde0-4c88-aa6a-d7b43f868565-kube-api-access-d4chc\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.627564 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-ovndb-tls-certs\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.627614 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc29v\" (UniqueName: \"kubernetes.io/projected/6f121c86-c5ee-47c3-b80b-f8791a68ee15-kube-api-access-mc29v\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.627649 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-dns-svc\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.627705 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-config\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.627754 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-httpd-config\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.627888 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-combined-ca-bundle\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.627989 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.628016 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.628058 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-config\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.636429 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d5647fbb4-vvzj4"] Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.637137 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-dns-svc\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.642006 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.643461 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-config\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.643995 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.644183 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 
crc kubenswrapper[4806]: I0126 08:11:20.661447 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc29v\" (UniqueName: \"kubernetes.io/projected/6f121c86-c5ee-47c3-b80b-f8791a68ee15-kube-api-access-mc29v\") pod \"dnsmasq-dns-6b7b667979-d6hkg\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.731688 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-config\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.731780 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4chc\" (UniqueName: \"kubernetes.io/projected/c547f19a-cde0-4c88-aa6a-d7b43f868565-kube-api-access-d4chc\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.731801 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-ovndb-tls-certs\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.732282 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-httpd-config\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.732418 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-combined-ca-bundle\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.744804 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-combined-ca-bundle\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.749561 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-config\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.749657 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4chc\" (UniqueName: \"kubernetes.io/projected/c547f19a-cde0-4c88-aa6a-d7b43f868565-kube-api-access-d4chc\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.750632 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-ovndb-tls-certs\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.751452 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-httpd-config\") pod \"neutron-d5647fbb4-vvzj4\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.778744 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:20 crc kubenswrapper[4806]: I0126 08:11:20.790822 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.079499 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ded787e-1546-468b-a693-640272090020" path="/var/lib/kubelet/pods/1ded787e-1546-468b-a693-640272090020/volumes" Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.080333 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a15cfc51-1d28-4476-b9ac-2ef08300220f" path="/var/lib/kubelet/pods/a15cfc51-1d28-4476-b9ac-2ef08300220f/volumes" Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.080899 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" path="/var/lib/kubelet/pods/c6ba8a7a-2708-4123-90e8-5b66f4c86448/volumes" Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.197671 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d485d788d-5q4tb"] Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.366396 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-q5h8d"] Jan 26 08:11:21 crc kubenswrapper[4806]: W0126 08:11:21.372645 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a935821_0f8f_4e4f_9ae8_00fa265f8269.slice/crio-926d43f8c600b127d9907eca76d8ca2a2c630085be27e41f616714e9e1560b5a WatchSource:0}: Error finding container 926d43f8c600b127d9907eca76d8ca2a2c630085be27e41f616714e9e1560b5a: Status 404 returned error can't find the container with id 926d43f8c600b127d9907eca76d8ca2a2c630085be27e41f616714e9e1560b5a Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.565703 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbdcb8bcc-96jf5" event={"ID":"36dba152-b43d-47c4-94bb-874f93b0884f","Type":"ContainerStarted","Data":"5a999e2cb6a801c6ea12d47114e5ca927ccfbec059be674dc6989803b3e94929"} Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.569455 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbdcb8bcc-96jf5" event={"ID":"36dba152-b43d-47c4-94bb-874f93b0884f","Type":"ContainerStarted","Data":"f43ad7611386218711315d63b148811af89ce769e51f9c16adce40cda7cf010b"} Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.566132 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cbdcb8bcc-96jf5" podUID="36dba152-b43d-47c4-94bb-874f93b0884f" containerName="horizon" containerID="cri-o://5a999e2cb6a801c6ea12d47114e5ca927ccfbec059be674dc6989803b3e94929" gracePeriod=30 Jan 26 08:11:21 crc kubenswrapper[4806]: 
I0126 08:11:21.565804 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cbdcb8bcc-96jf5" podUID="36dba152-b43d-47c4-94bb-874f93b0884f" containerName="horizon-log" containerID="cri-o://f43ad7611386218711315d63b148811af89ce769e51f9c16adce40cda7cf010b" gracePeriod=30 Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.580794 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"8880d10e53faf854bc25456c263d76882c8161d6eb264ea6dd36a69766a56246"} Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.593101 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d485d788d-5q4tb" event={"ID":"d7b4ee8d-6333-4683-94c4-b79229c76537","Type":"ContainerStarted","Data":"9ff95e3e3101df9a660674f885ad43770e43e1a900182394428d477ad095fa2b"} Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.594860 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" event={"ID":"8a935821-0f8f-4e4f-9ae8-00fa265f8269","Type":"ContainerStarted","Data":"926d43f8c600b127d9907eca76d8ca2a2c630085be27e41f616714e9e1560b5a"} Jan 26 08:11:21 crc kubenswrapper[4806]: E0126 08:11:21.598186 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qw29c" podUID="bc6102bf-7483-4063-af9d-841e78398b0c" Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.611793 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-cbdcb8bcc-96jf5" podStartSLOduration=4.860673879 podStartE2EDuration="37.611778579s" podCreationTimestamp="2026-01-26 08:10:44 +0000 UTC" firstStartedPulling="2026-01-26 08:10:46.411890297 +0000 UTC m=+1025.676298353" lastFinishedPulling="2026-01-26 08:11:19.162994997 +0000 UTC m=+1058.427403053" observedRunningTime="2026-01-26 08:11:21.611177482 +0000 UTC m=+1060.875585538" watchObservedRunningTime="2026-01-26 08:11:21.611778579 +0000 UTC m=+1060.876186635" Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.808525 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6b8f96b47b-sbsnb"] Jan 26 08:11:21 crc kubenswrapper[4806]: W0126 08:11:21.863421 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cdc8231_c5bf_4074_883b_94949b9e00dc.slice/crio-f4e72c873749f034a4d30d949facdbf214e7c037d7e42a60258d26e72d05c44e WatchSource:0}: Error finding container f4e72c873749f034a4d30d949facdbf214e7c037d7e42a60258d26e72d05c44e: Status 404 returned error can't find the container with id f4e72c873749f034a4d30d949facdbf214e7c037d7e42a60258d26e72d05c44e Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.903034 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-btkzm"] Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.917260 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d6hkg"] Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.932079 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:11:21 crc kubenswrapper[4806]: I0126 08:11:21.962622 4806 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-xcfqg" podUID="c6ba8a7a-2708-4123-90e8-5b66f4c86448" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.022199 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.470581 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d5647fbb4-vvzj4"] Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.636497 4806 generic.go:334] "Generic (PLEG): container finished" podID="6f121c86-c5ee-47c3-b80b-f8791a68ee15" containerID="16c3f0f7b1670dde9dd5b12e056f8138fefa91d69ceb8da299d19ecb16396892" exitCode=0 Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.636577 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" event={"ID":"6f121c86-c5ee-47c3-b80b-f8791a68ee15","Type":"ContainerDied","Data":"16c3f0f7b1670dde9dd5b12e056f8138fefa91d69ceb8da299d19ecb16396892"} Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.636621 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" event={"ID":"6f121c86-c5ee-47c3-b80b-f8791a68ee15","Type":"ContainerStarted","Data":"38f7440d4992a6b7670a357834a24f326b655e43b7cccd8ce4616dbe5c234e3b"} Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.642720 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b8f96b47b-sbsnb" event={"ID":"d4ed3e96-22ec-410e-8f50-afd310343aa8","Type":"ContainerStarted","Data":"a09f2f022f2412188c545633fabbb6e4a0425cd682be778b24e3a09f7182b0e0"} Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.642763 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b8f96b47b-sbsnb" event={"ID":"d4ed3e96-22ec-410e-8f50-afd310343aa8","Type":"ContainerStarted","Data":"e9bab62fbcf1de9e428b558840cfafffd701f684badedadd3aa45617214cbfa0"} Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.667550 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0abdef54-e353-49e8-9dbd-bc47d32d131e","Type":"ContainerStarted","Data":"24d1d784c5ea330c7a4aebde837bd74a21c2b5b1e602917dc84f08e9ec67377f"} Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.673895 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d485d788d-5q4tb" event={"ID":"d7b4ee8d-6333-4683-94c4-b79229c76537","Type":"ContainerStarted","Data":"2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc"} Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.673938 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d485d788d-5q4tb" event={"ID":"d7b4ee8d-6333-4683-94c4-b79229c76537","Type":"ContainerStarted","Data":"f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d"} Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.681155 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-btkzm" event={"ID":"6a63a316-0795-4795-8662-5c0b2de2597f","Type":"ContainerStarted","Data":"f5afbd855fc295ff0dfcceb591fa970b5bf97f180e0f1d5686519f0411226b48"} Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.681191 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-btkzm" 
event={"ID":"6a63a316-0795-4795-8662-5c0b2de2597f","Type":"ContainerStarted","Data":"52f12ae359cad1668cf1dbb8d4d637e3962fd6a8b3ad18eb31a00da66e6eff14"} Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.687029 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hnszz" event={"ID":"86ed2345-2edc-46bb-a416-3cfa5c01b38d","Type":"ContainerStarted","Data":"ed2485be131e4cad4b6ee955c2e0c99fc91b6551e9083f5ac7d8e12c02c13027"} Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.706318 4806 generic.go:334] "Generic (PLEG): container finished" podID="8a935821-0f8f-4e4f-9ae8-00fa265f8269" containerID="0f35d509c91ab6c2d444ff43c320ee6016b1b4518a39b45f57e589832857130d" exitCode=0 Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.706449 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" event={"ID":"8a935821-0f8f-4e4f-9ae8-00fa265f8269","Type":"ContainerDied","Data":"0f35d509c91ab6c2d444ff43c320ee6016b1b4518a39b45f57e589832857130d"} Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.720925 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7d485d788d-5q4tb" podStartSLOduration=29.720907507 podStartE2EDuration="29.720907507s" podCreationTimestamp="2026-01-26 08:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:22.698284981 +0000 UTC m=+1061.962693047" watchObservedRunningTime="2026-01-26 08:11:22.720907507 +0000 UTC m=+1061.985315563" Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.725778 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6cdc8231-c5bf-4074-883b-94949b9e00dc","Type":"ContainerStarted","Data":"f4e72c873749f034a4d30d949facdbf214e7c037d7e42a60258d26e72d05c44e"} Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.751914 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hnszz" podStartSLOduration=3.680300769 podStartE2EDuration="38.751895767s" podCreationTimestamp="2026-01-26 08:10:44 +0000 UTC" firstStartedPulling="2026-01-26 08:10:46.621802494 +0000 UTC m=+1025.886210550" lastFinishedPulling="2026-01-26 08:11:21.693397492 +0000 UTC m=+1060.957805548" observedRunningTime="2026-01-26 08:11:22.751052364 +0000 UTC m=+1062.015460420" watchObservedRunningTime="2026-01-26 08:11:22.751895767 +0000 UTC m=+1062.016303823" Jan 26 08:11:22 crc kubenswrapper[4806]: I0126 08:11:22.756920 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-btkzm" podStartSLOduration=20.756906968 podStartE2EDuration="20.756906968s" podCreationTimestamp="2026-01-26 08:11:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:22.725982959 +0000 UTC m=+1061.990391015" watchObservedRunningTime="2026-01-26 08:11:22.756906968 +0000 UTC m=+1062.021315024" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.109390 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7bcf6cb6cc-b9fxx"] Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.110871 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.115585 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.115841 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.142040 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bcf6cb6cc-b9fxx"] Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.230116 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-public-tls-certs\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.230182 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-combined-ca-bundle\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.230250 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-internal-tls-certs\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.230269 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-ovndb-tls-certs\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.230298 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-httpd-config\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.230335 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77d4r\" (UniqueName: \"kubernetes.io/projected/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-kube-api-access-77d4r\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.230377 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-config\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.332104 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-config\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.332210 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-public-tls-certs\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.332252 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-combined-ca-bundle\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.332296 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-internal-tls-certs\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.332317 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-ovndb-tls-certs\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.332343 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-httpd-config\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.332371 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77d4r\" (UniqueName: \"kubernetes.io/projected/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-kube-api-access-77d4r\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.337235 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-public-tls-certs\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.337253 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-combined-ca-bundle\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.337452 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-internal-tls-certs\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: 
\"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.337490 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-config\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.338510 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-httpd-config\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.368925 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77d4r\" (UniqueName: \"kubernetes.io/projected/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-kube-api-access-77d4r\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.369975 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-ovndb-tls-certs\") pod \"neutron-7bcf6cb6cc-b9fxx\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.425785 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.559483 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.727460 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.729339 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.747675 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-config\") pod \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.747715 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-ovsdbserver-nb\") pod \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.747857 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-dns-svc\") pod \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.747893 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-ovsdbserver-sb\") pod \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.747934 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5x2m\" (UniqueName: \"kubernetes.io/projected/8a935821-0f8f-4e4f-9ae8-00fa265f8269-kube-api-access-g5x2m\") pod \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.747968 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-dns-swift-storage-0\") pod \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\" (UID: \"8a935821-0f8f-4e4f-9ae8-00fa265f8269\") " Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.761178 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d5647fbb4-vvzj4" event={"ID":"c547f19a-cde0-4c88-aa6a-d7b43f868565","Type":"ContainerStarted","Data":"5cb2c60e993ae596622a0d8a6da7ca562678b11bffdd59299536e30cb6e32b2f"} Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.770989 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a935821-0f8f-4e4f-9ae8-00fa265f8269-kube-api-access-g5x2m" (OuterVolumeSpecName: "kube-api-access-g5x2m") pod "8a935821-0f8f-4e4f-9ae8-00fa265f8269" (UID: "8a935821-0f8f-4e4f-9ae8-00fa265f8269"). InnerVolumeSpecName "kube-api-access-g5x2m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.792227 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" event={"ID":"8a935821-0f8f-4e4f-9ae8-00fa265f8269","Type":"ContainerDied","Data":"926d43f8c600b127d9907eca76d8ca2a2c630085be27e41f616714e9e1560b5a"} Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.792273 4806 scope.go:117] "RemoveContainer" containerID="0f35d509c91ab6c2d444ff43c320ee6016b1b4518a39b45f57e589832857130d" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.792401 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-q5h8d" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.816416 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8a935821-0f8f-4e4f-9ae8-00fa265f8269" (UID: "8a935821-0f8f-4e4f-9ae8-00fa265f8269"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.818200 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6cdc8231-c5bf-4074-883b-94949b9e00dc","Type":"ContainerStarted","Data":"83ccf7e64256c5a57bb2b61caedb6981dbb6531dd1061a6cbe73eb36ffd4c024"} Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.824854 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-config" (OuterVolumeSpecName: "config") pod "8a935821-0f8f-4e4f-9ae8-00fa265f8269" (UID: "8a935821-0f8f-4e4f-9ae8-00fa265f8269"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.846618 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8a935821-0f8f-4e4f-9ae8-00fa265f8269" (UID: "8a935821-0f8f-4e4f-9ae8-00fa265f8269"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.849834 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5x2m\" (UniqueName: \"kubernetes.io/projected/8a935821-0f8f-4e4f-9ae8-00fa265f8269-kube-api-access-g5x2m\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.849876 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.849888 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.849896 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.926751 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a935821-0f8f-4e4f-9ae8-00fa265f8269" (UID: "8a935821-0f8f-4e4f-9ae8-00fa265f8269"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.928616 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8a935821-0f8f-4e4f-9ae8-00fa265f8269" (UID: "8a935821-0f8f-4e4f-9ae8-00fa265f8269"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.960139 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:23 crc kubenswrapper[4806]: I0126 08:11:23.960169 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a935821-0f8f-4e4f-9ae8-00fa265f8269-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:24 crc kubenswrapper[4806]: I0126 08:11:24.223590 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-q5h8d"] Jan 26 08:11:24 crc kubenswrapper[4806]: I0126 08:11:24.241118 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-q5h8d"] Jan 26 08:11:24 crc kubenswrapper[4806]: I0126 08:11:24.584424 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7bcf6cb6cc-b9fxx"] Jan 26 08:11:24 crc kubenswrapper[4806]: I0126 08:11:24.832641 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bcf6cb6cc-b9fxx" event={"ID":"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38","Type":"ContainerStarted","Data":"a3801e557d6f2db14f64621f4c38a600712a179b8e545df0f587f6cdc5858021"} Jan 26 08:11:24 crc kubenswrapper[4806]: I0126 08:11:24.933325 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.077601 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a935821-0f8f-4e4f-9ae8-00fa265f8269" path="/var/lib/kubelet/pods/8a935821-0f8f-4e4f-9ae8-00fa265f8269/volumes" Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.858704 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93bf46a8-2942-4b36-9853-88ff5c6e756b","Type":"ContainerStarted","Data":"ce21628d69b54f9e7078ea4cc4723a743027cca289649838f8e9a6552da1cecf"} Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.861325 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" event={"ID":"6f121c86-c5ee-47c3-b80b-f8791a68ee15","Type":"ContainerStarted","Data":"31f15ffccfba1e8be909c21c0618c26eb1e7bfa209ca17a5c1161b715201b10c"} Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.861952 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.876453 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d5647fbb4-vvzj4" event={"ID":"c547f19a-cde0-4c88-aa6a-d7b43f868565","Type":"ContainerStarted","Data":"dcdc9e36654ccc8cff04cd981b11301de74f383b2f670b79f3add120dc058f25"} Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.876508 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d5647fbb4-vvzj4" event={"ID":"c547f19a-cde0-4c88-aa6a-d7b43f868565","Type":"ContainerStarted","Data":"40cbaac9b2d13f60e8b402ba949a9eef260993775c0a8f132c5f61f476c58cec"} Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.876590 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.879668 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6b8f96b47b-sbsnb" 
event={"ID":"d4ed3e96-22ec-410e-8f50-afd310343aa8","Type":"ContainerStarted","Data":"6604da69c5d3a234f5e2c2196ef078675d09a8c5f616414f8f77812022d54176"} Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.889382 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" podStartSLOduration=5.889361146 podStartE2EDuration="5.889361146s" podCreationTimestamp="2026-01-26 08:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:25.88632025 +0000 UTC m=+1065.150728306" watchObservedRunningTime="2026-01-26 08:11:25.889361146 +0000 UTC m=+1065.153769202" Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.897511 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0abdef54-e353-49e8-9dbd-bc47d32d131e","Type":"ContainerStarted","Data":"38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144"} Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.911250 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bcf6cb6cc-b9fxx" event={"ID":"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38","Type":"ContainerStarted","Data":"703c7f76126c519f5df185e47194d1d9b3f23aed98c6ae176f3ccd52e6ab29ea"} Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.911287 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bcf6cb6cc-b9fxx" event={"ID":"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38","Type":"ContainerStarted","Data":"fd594b637acf82dc96d273bb5255ca283aeb955bd5f41111dc327731bb40271d"} Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.912228 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.926003 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d5647fbb4-vvzj4" podStartSLOduration=5.925988665 podStartE2EDuration="5.925988665s" podCreationTimestamp="2026-01-26 08:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:25.919798831 +0000 UTC m=+1065.184206887" watchObservedRunningTime="2026-01-26 08:11:25.925988665 +0000 UTC m=+1065.190396721" Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.929068 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6cdc8231-c5bf-4074-883b-94949b9e00dc","Type":"ContainerStarted","Data":"bb263c2cc5ece11f59df82c8c0ec57770cc32119e9e59e2fa122f52c631cb99d"} Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.929215 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6cdc8231-c5bf-4074-883b-94949b9e00dc" containerName="glance-log" containerID="cri-o://83ccf7e64256c5a57bb2b61caedb6981dbb6531dd1061a6cbe73eb36ffd4c024" gracePeriod=30 Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.929439 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6cdc8231-c5bf-4074-883b-94949b9e00dc" containerName="glance-httpd" containerID="cri-o://bb263c2cc5ece11f59df82c8c0ec57770cc32119e9e59e2fa122f52c631cb99d" gracePeriod=30 Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.975483 4806 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/horizon-6b8f96b47b-sbsnb" podStartSLOduration=32.975466144 podStartE2EDuration="32.975466144s" podCreationTimestamp="2026-01-26 08:10:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:25.955163634 +0000 UTC m=+1065.219571690" watchObservedRunningTime="2026-01-26 08:11:25.975466144 +0000 UTC m=+1065.239874200" Jan 26 08:11:25 crc kubenswrapper[4806]: I0126 08:11:25.996600 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7bcf6cb6cc-b9fxx" podStartSLOduration=2.996583428 podStartE2EDuration="2.996583428s" podCreationTimestamp="2026-01-26 08:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:25.992018349 +0000 UTC m=+1065.256426395" watchObservedRunningTime="2026-01-26 08:11:25.996583428 +0000 UTC m=+1065.260991484" Jan 26 08:11:26 crc kubenswrapper[4806]: I0126 08:11:26.049430 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=30.049404722 podStartE2EDuration="30.049404722s" podCreationTimestamp="2026-01-26 08:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:26.028304879 +0000 UTC m=+1065.292712935" watchObservedRunningTime="2026-01-26 08:11:26.049404722 +0000 UTC m=+1065.313812798" Jan 26 08:11:26 crc kubenswrapper[4806]: I0126 08:11:26.978741 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0abdef54-e353-49e8-9dbd-bc47d32d131e","Type":"ContainerStarted","Data":"42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670"} Jan 26 08:11:26 crc kubenswrapper[4806]: I0126 08:11:26.979540 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0abdef54-e353-49e8-9dbd-bc47d32d131e" containerName="glance-log" containerID="cri-o://38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144" gracePeriod=30 Jan 26 08:11:26 crc kubenswrapper[4806]: I0126 08:11:26.980416 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="0abdef54-e353-49e8-9dbd-bc47d32d131e" containerName="glance-httpd" containerID="cri-o://42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670" gracePeriod=30 Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:26.999380 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8dksv" event={"ID":"4588263f-b01b-4a54-829f-1cef11d1dbd3","Type":"ContainerStarted","Data":"fe8f1dbf123a7ed8f81d7773dea0015a57089e658ae2a3760eead5826aeece01"} Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.016152 4806 generic.go:334] "Generic (PLEG): container finished" podID="86ed2345-2edc-46bb-a416-3cfa5c01b38d" containerID="ed2485be131e4cad4b6ee955c2e0c99fc91b6551e9083f5ac7d8e12c02c13027" exitCode=0 Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.016241 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hnszz" event={"ID":"86ed2345-2edc-46bb-a416-3cfa5c01b38d","Type":"ContainerDied","Data":"ed2485be131e4cad4b6ee955c2e0c99fc91b6551e9083f5ac7d8e12c02c13027"} Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 
08:11:27.024180 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=31.024162734 podStartE2EDuration="31.024162734s" podCreationTimestamp="2026-01-26 08:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:27.009621475 +0000 UTC m=+1066.274029531" watchObservedRunningTime="2026-01-26 08:11:27.024162734 +0000 UTC m=+1066.288570790" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.039785 4806 generic.go:334] "Generic (PLEG): container finished" podID="6cdc8231-c5bf-4074-883b-94949b9e00dc" containerID="bb263c2cc5ece11f59df82c8c0ec57770cc32119e9e59e2fa122f52c631cb99d" exitCode=0 Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.039816 4806 generic.go:334] "Generic (PLEG): container finished" podID="6cdc8231-c5bf-4074-883b-94949b9e00dc" containerID="83ccf7e64256c5a57bb2b61caedb6981dbb6531dd1061a6cbe73eb36ffd4c024" exitCode=143 Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.043944 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-8dksv" podStartSLOduration=3.697133162 podStartE2EDuration="43.043928889s" podCreationTimestamp="2026-01-26 08:10:44 +0000 UTC" firstStartedPulling="2026-01-26 08:10:46.347444837 +0000 UTC m=+1025.611852893" lastFinishedPulling="2026-01-26 08:11:25.694240564 +0000 UTC m=+1064.958648620" observedRunningTime="2026-01-26 08:11:27.03647043 +0000 UTC m=+1066.300878486" watchObservedRunningTime="2026-01-26 08:11:27.043928889 +0000 UTC m=+1066.308336945" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.045293 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6cdc8231-c5bf-4074-883b-94949b9e00dc","Type":"ContainerDied","Data":"bb263c2cc5ece11f59df82c8c0ec57770cc32119e9e59e2fa122f52c631cb99d"} Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.045338 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6cdc8231-c5bf-4074-883b-94949b9e00dc","Type":"ContainerDied","Data":"83ccf7e64256c5a57bb2b61caedb6981dbb6531dd1061a6cbe73eb36ffd4c024"} Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.221297 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.363853 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-combined-ca-bundle\") pod \"6cdc8231-c5bf-4074-883b-94949b9e00dc\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.364114 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szc25\" (UniqueName: \"kubernetes.io/projected/6cdc8231-c5bf-4074-883b-94949b9e00dc-kube-api-access-szc25\") pod \"6cdc8231-c5bf-4074-883b-94949b9e00dc\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.364199 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"6cdc8231-c5bf-4074-883b-94949b9e00dc\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.364283 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-config-data\") pod \"6cdc8231-c5bf-4074-883b-94949b9e00dc\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.364385 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8231-c5bf-4074-883b-94949b9e00dc-logs\") pod \"6cdc8231-c5bf-4074-883b-94949b9e00dc\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.364460 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8231-c5bf-4074-883b-94949b9e00dc-httpd-run\") pod \"6cdc8231-c5bf-4074-883b-94949b9e00dc\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.364563 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-scripts\") pod \"6cdc8231-c5bf-4074-883b-94949b9e00dc\" (UID: \"6cdc8231-c5bf-4074-883b-94949b9e00dc\") " Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.364685 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cdc8231-c5bf-4074-883b-94949b9e00dc-logs" (OuterVolumeSpecName: "logs") pod "6cdc8231-c5bf-4074-883b-94949b9e00dc" (UID: "6cdc8231-c5bf-4074-883b-94949b9e00dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.364731 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cdc8231-c5bf-4074-883b-94949b9e00dc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6cdc8231-c5bf-4074-883b-94949b9e00dc" (UID: "6cdc8231-c5bf-4074-883b-94949b9e00dc"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.365124 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8231-c5bf-4074-883b-94949b9e00dc-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.365217 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6cdc8231-c5bf-4074-883b-94949b9e00dc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.385229 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-scripts" (OuterVolumeSpecName: "scripts") pod "6cdc8231-c5bf-4074-883b-94949b9e00dc" (UID: "6cdc8231-c5bf-4074-883b-94949b9e00dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.390660 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "6cdc8231-c5bf-4074-883b-94949b9e00dc" (UID: "6cdc8231-c5bf-4074-883b-94949b9e00dc"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.419674 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cdc8231-c5bf-4074-883b-94949b9e00dc-kube-api-access-szc25" (OuterVolumeSpecName: "kube-api-access-szc25") pod "6cdc8231-c5bf-4074-883b-94949b9e00dc" (UID: "6cdc8231-c5bf-4074-883b-94949b9e00dc"). InnerVolumeSpecName "kube-api-access-szc25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.437686 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cdc8231-c5bf-4074-883b-94949b9e00dc" (UID: "6cdc8231-c5bf-4074-883b-94949b9e00dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.468844 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.468887 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.468901 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szc25\" (UniqueName: \"kubernetes.io/projected/6cdc8231-c5bf-4074-883b-94949b9e00dc-kube-api-access-szc25\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.468933 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.482731 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-config-data" (OuterVolumeSpecName: "config-data") pod "6cdc8231-c5bf-4074-883b-94949b9e00dc" (UID: "6cdc8231-c5bf-4074-883b-94949b9e00dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.492869 4806 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.512629 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.512694 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.571586 4806 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.571624 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cdc8231-c5bf-4074-883b-94949b9e00dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:27 crc kubenswrapper[4806]: I0126 08:11:27.994612 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.072392 4806 generic.go:334] "Generic (PLEG): container finished" podID="0abdef54-e353-49e8-9dbd-bc47d32d131e" containerID="42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670" exitCode=0 Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.072426 4806 generic.go:334] "Generic (PLEG): container finished" podID="0abdef54-e353-49e8-9dbd-bc47d32d131e" containerID="38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144" exitCode=143 Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.072479 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0abdef54-e353-49e8-9dbd-bc47d32d131e","Type":"ContainerDied","Data":"42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670"} Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.072511 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0abdef54-e353-49e8-9dbd-bc47d32d131e","Type":"ContainerDied","Data":"38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144"} Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.074004 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.075120 4806 scope.go:117] "RemoveContainer" containerID="42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.078604 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"0abdef54-e353-49e8-9dbd-bc47d32d131e","Type":"ContainerDied","Data":"24d1d784c5ea330c7a4aebde837bd74a21c2b5b1e602917dc84f08e9ec67377f"} Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.079074 4806 generic.go:334] "Generic (PLEG): container finished" podID="6a63a316-0795-4795-8662-5c0b2de2597f" containerID="f5afbd855fc295ff0dfcceb591fa970b5bf97f180e0f1d5686519f0411226b48" exitCode=0 Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.079125 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-btkzm" event={"ID":"6a63a316-0795-4795-8662-5c0b2de2597f","Type":"ContainerDied","Data":"f5afbd855fc295ff0dfcceb591fa970b5bf97f180e0f1d5686519f0411226b48"} Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.093695 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-combined-ca-bundle\") pod \"0abdef54-e353-49e8-9dbd-bc47d32d131e\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.093786 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"0abdef54-e353-49e8-9dbd-bc47d32d131e\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.093848 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0abdef54-e353-49e8-9dbd-bc47d32d131e-httpd-run\") pod \"0abdef54-e353-49e8-9dbd-bc47d32d131e\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.093897 4806 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck7db\" (UniqueName: \"kubernetes.io/projected/0abdef54-e353-49e8-9dbd-bc47d32d131e-kube-api-access-ck7db\") pod \"0abdef54-e353-49e8-9dbd-bc47d32d131e\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.093941 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-config-data\") pod \"0abdef54-e353-49e8-9dbd-bc47d32d131e\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.093963 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0abdef54-e353-49e8-9dbd-bc47d32d131e-logs\") pod \"0abdef54-e353-49e8-9dbd-bc47d32d131e\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.094048 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-scripts\") pod \"0abdef54-e353-49e8-9dbd-bc47d32d131e\" (UID: \"0abdef54-e353-49e8-9dbd-bc47d32d131e\") " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.100749 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0abdef54-e353-49e8-9dbd-bc47d32d131e-logs" (OuterVolumeSpecName: "logs") pod "0abdef54-e353-49e8-9dbd-bc47d32d131e" (UID: "0abdef54-e353-49e8-9dbd-bc47d32d131e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.101504 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0abdef54-e353-49e8-9dbd-bc47d32d131e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0abdef54-e353-49e8-9dbd-bc47d32d131e" (UID: "0abdef54-e353-49e8-9dbd-bc47d32d131e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.105250 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0abdef54-e353-49e8-9dbd-bc47d32d131e-kube-api-access-ck7db" (OuterVolumeSpecName: "kube-api-access-ck7db") pod "0abdef54-e353-49e8-9dbd-bc47d32d131e" (UID: "0abdef54-e353-49e8-9dbd-bc47d32d131e"). InnerVolumeSpecName "kube-api-access-ck7db". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.112656 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.112844 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6cdc8231-c5bf-4074-883b-94949b9e00dc","Type":"ContainerDied","Data":"f4e72c873749f034a4d30d949facdbf214e7c037d7e42a60258d26e72d05c44e"} Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.137752 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "0abdef54-e353-49e8-9dbd-bc47d32d131e" (UID: "0abdef54-e353-49e8-9dbd-bc47d32d131e"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.177713 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-scripts" (OuterVolumeSpecName: "scripts") pod "0abdef54-e353-49e8-9dbd-bc47d32d131e" (UID: "0abdef54-e353-49e8-9dbd-bc47d32d131e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.197434 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck7db\" (UniqueName: \"kubernetes.io/projected/0abdef54-e353-49e8-9dbd-bc47d32d131e-kube-api-access-ck7db\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.197467 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0abdef54-e353-49e8-9dbd-bc47d32d131e-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.197479 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.197498 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.197511 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0abdef54-e353-49e8-9dbd-bc47d32d131e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.198958 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0abdef54-e353-49e8-9dbd-bc47d32d131e" (UID: "0abdef54-e353-49e8-9dbd-bc47d32d131e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.243673 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-config-data" (OuterVolumeSpecName: "config-data") pod "0abdef54-e353-49e8-9dbd-bc47d32d131e" (UID: "0abdef54-e353-49e8-9dbd-bc47d32d131e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.253700 4806 scope.go:117] "RemoveContainer" containerID="38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.269161 4806 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.279819 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.285682 4806 scope.go:117] "RemoveContainer" containerID="42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670" Jan 26 08:11:28 crc kubenswrapper[4806]: E0126 08:11:28.286194 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670\": container with ID starting with 42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670 not found: ID does not exist" containerID="42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.286217 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670"} err="failed to get container status \"42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670\": rpc error: code = NotFound desc = could not find container \"42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670\": container with ID starting with 42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670 not found: ID does not exist" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.286252 4806 scope.go:117] "RemoveContainer" containerID="38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144" Jan 26 08:11:28 crc kubenswrapper[4806]: E0126 08:11:28.288851 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144\": container with ID starting with 38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144 not found: ID does not exist" containerID="38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.288871 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144"} err="failed to get container status \"38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144\": rpc error: code = NotFound desc = could not find container \"38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144\": container with ID starting with 38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144 not found: ID does not exist" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.288886 4806 scope.go:117] "RemoveContainer" containerID="42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.289773 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670"} err="failed to get container status 
\"42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670\": rpc error: code = NotFound desc = could not find container \"42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670\": container with ID starting with 42027540d4ff9845837bedb682481f3e2764c14bfb1dba27e29fc227bb1de670 not found: ID does not exist" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.289815 4806 scope.go:117] "RemoveContainer" containerID="38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.291573 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.293302 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144"} err="failed to get container status \"38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144\": rpc error: code = NotFound desc = could not find container \"38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144\": container with ID starting with 38152e1f713befd776e645b5872ec18aac962dcc21613d252fbe05f8ffb4b144 not found: ID does not exist" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.293343 4806 scope.go:117] "RemoveContainer" containerID="bb263c2cc5ece11f59df82c8c0ec57770cc32119e9e59e2fa122f52c631cb99d" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.298647 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.298771 4806 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.298852 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0abdef54-e353-49e8-9dbd-bc47d32d131e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.310878 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:11:28 crc kubenswrapper[4806]: E0126 08:11:28.311215 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abdef54-e353-49e8-9dbd-bc47d32d131e" containerName="glance-httpd" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.311231 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abdef54-e353-49e8-9dbd-bc47d32d131e" containerName="glance-httpd" Jan 26 08:11:28 crc kubenswrapper[4806]: E0126 08:11:28.311242 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abdef54-e353-49e8-9dbd-bc47d32d131e" containerName="glance-log" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.311249 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abdef54-e353-49e8-9dbd-bc47d32d131e" containerName="glance-log" Jan 26 08:11:28 crc kubenswrapper[4806]: E0126 08:11:28.311265 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a935821-0f8f-4e4f-9ae8-00fa265f8269" containerName="init" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.311272 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a935821-0f8f-4e4f-9ae8-00fa265f8269" containerName="init" Jan 26 08:11:28 crc 
kubenswrapper[4806]: E0126 08:11:28.311292 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdc8231-c5bf-4074-883b-94949b9e00dc" containerName="glance-httpd" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.311298 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdc8231-c5bf-4074-883b-94949b9e00dc" containerName="glance-httpd" Jan 26 08:11:28 crc kubenswrapper[4806]: E0126 08:11:28.311309 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cdc8231-c5bf-4074-883b-94949b9e00dc" containerName="glance-log" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.311315 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdc8231-c5bf-4074-883b-94949b9e00dc" containerName="glance-log" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.311513 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0abdef54-e353-49e8-9dbd-bc47d32d131e" containerName="glance-log" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.311551 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cdc8231-c5bf-4074-883b-94949b9e00dc" containerName="glance-log" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.311566 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a935821-0f8f-4e4f-9ae8-00fa265f8269" containerName="init" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.311577 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cdc8231-c5bf-4074-883b-94949b9e00dc" containerName="glance-httpd" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.311596 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0abdef54-e353-49e8-9dbd-bc47d32d131e" containerName="glance-httpd" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.312515 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.317318 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.318498 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.318610 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.379795 4806 scope.go:117] "RemoveContainer" containerID="83ccf7e64256c5a57bb2b61caedb6981dbb6531dd1061a6cbe73eb36ffd4c024" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.470587 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.488079 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hnszz" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.488668 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.500598 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:11:28 crc kubenswrapper[4806]: E0126 08:11:28.501026 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ed2345-2edc-46bb-a416-3cfa5c01b38d" containerName="placement-db-sync" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.501042 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ed2345-2edc-46bb-a416-3cfa5c01b38d" containerName="placement-db-sync" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.501247 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ed2345-2edc-46bb-a416-3cfa5c01b38d" containerName="placement-db-sync" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.501723 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-config-data\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.501798 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-scripts\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.501819 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.501950 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-logs\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.501988 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.502113 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.502151 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.502164 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.502604 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hljgh\" (UniqueName: \"kubernetes.io/projected/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-kube-api-access-hljgh\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.508767 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.509024 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.525602 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.605165 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlpp6\" (UniqueName: \"kubernetes.io/projected/86ed2345-2edc-46bb-a416-3cfa5c01b38d-kube-api-access-rlpp6\") pod \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.605264 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-config-data\") pod \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.605297 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86ed2345-2edc-46bb-a416-3cfa5c01b38d-logs\") pod \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.605355 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-scripts\") pod \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.605440 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-combined-ca-bundle\") pod \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\" (UID: \"86ed2345-2edc-46bb-a416-3cfa5c01b38d\") " Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.605822 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.605895 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-logs\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.605934 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.605960 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.605999 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.606030 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.606051 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.606102 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.606132 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.606151 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hljgh\" (UniqueName: \"kubernetes.io/projected/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-kube-api-access-hljgh\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " 
pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.606172 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.606205 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-config-data\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.606219 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.606236 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stk4d\" (UniqueName: \"kubernetes.io/projected/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-kube-api-access-stk4d\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.606264 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-logs\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.606299 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-scripts\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.610053 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.610518 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.610780 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-logs\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 
08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.611112 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86ed2345-2edc-46bb-a416-3cfa5c01b38d-logs" (OuterVolumeSpecName: "logs") pod "86ed2345-2edc-46bb-a416-3cfa5c01b38d" (UID: "86ed2345-2edc-46bb-a416-3cfa5c01b38d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.614481 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.623566 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-scripts\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.634981 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-scripts" (OuterVolumeSpecName: "scripts") pod "86ed2345-2edc-46bb-a416-3cfa5c01b38d" (UID: "86ed2345-2edc-46bb-a416-3cfa5c01b38d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.638014 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-config-data\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.638281 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86ed2345-2edc-46bb-a416-3cfa5c01b38d-kube-api-access-rlpp6" (OuterVolumeSpecName: "kube-api-access-rlpp6") pod "86ed2345-2edc-46bb-a416-3cfa5c01b38d" (UID: "86ed2345-2edc-46bb-a416-3cfa5c01b38d"). InnerVolumeSpecName "kube-api-access-rlpp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.639253 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.639536 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hljgh\" (UniqueName: \"kubernetes.io/projected/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-kube-api-access-hljgh\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.644951 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-config-data" (OuterVolumeSpecName: "config-data") pod "86ed2345-2edc-46bb-a416-3cfa5c01b38d" (UID: "86ed2345-2edc-46bb-a416-3cfa5c01b38d"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.669340 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.685055 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86ed2345-2edc-46bb-a416-3cfa5c01b38d" (UID: "86ed2345-2edc-46bb-a416-3cfa5c01b38d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707566 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707627 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707655 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707711 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707738 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707773 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707789 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stk4d\" (UniqueName: \"kubernetes.io/projected/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-kube-api-access-stk4d\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " 
pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707813 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-logs\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707896 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707907 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86ed2345-2edc-46bb-a416-3cfa5c01b38d-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707918 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707928 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86ed2345-2edc-46bb-a416-3cfa5c01b38d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.707941 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlpp6\" (UniqueName: \"kubernetes.io/projected/86ed2345-2edc-46bb-a416-3cfa5c01b38d-kube-api-access-rlpp6\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.708351 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-logs\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.708471 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.710029 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.713592 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.717281 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.720068 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-config-data\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.726458 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-scripts\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.728027 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stk4d\" (UniqueName: \"kubernetes.io/projected/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-kube-api-access-stk4d\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.785718 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.834584 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 08:11:28 crc kubenswrapper[4806]: I0126 08:11:28.936560 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.062121 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0abdef54-e353-49e8-9dbd-bc47d32d131e" path="/var/lib/kubelet/pods/0abdef54-e353-49e8-9dbd-bc47d32d131e/volumes" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.062989 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cdc8231-c5bf-4074-883b-94949b9e00dc" path="/var/lib/kubelet/pods/6cdc8231-c5bf-4074-883b-94949b9e00dc/volumes" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.127243 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hnszz" event={"ID":"86ed2345-2edc-46bb-a416-3cfa5c01b38d","Type":"ContainerDied","Data":"89e2ffb816380037443529f69c704fb204139c80493e6d6008daccbaf329aa32"} Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.127280 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89e2ffb816380037443529f69c704fb204139c80493e6d6008daccbaf329aa32" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.127329 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hnszz" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.198381 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6f665b5db4-wpfmw"] Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.213327 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.222039 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.222154 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f665b5db4-wpfmw"] Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.222256 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.222315 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.222404 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.235740 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mwbpz" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.334751 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tnxg\" (UniqueName: \"kubernetes.io/projected/57bd9f7e-2311-4121-a33f-4610aecf4422-kube-api-access-6tnxg\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.334807 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-config-data\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.334880 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bd9f7e-2311-4121-a33f-4610aecf4422-logs\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.334900 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-public-tls-certs\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.334945 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-internal-tls-certs\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.334971 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-combined-ca-bundle\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 
08:11:29.335028 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-scripts\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.444488 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bd9f7e-2311-4121-a33f-4610aecf4422-logs\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.444544 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-public-tls-certs\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.444594 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-internal-tls-certs\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.444612 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-combined-ca-bundle\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.444660 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-scripts\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.444688 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tnxg\" (UniqueName: \"kubernetes.io/projected/57bd9f7e-2311-4121-a33f-4610aecf4422-kube-api-access-6tnxg\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.444706 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-config-data\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.450090 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-config-data\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.450403 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-public-tls-certs\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.452800 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-combined-ca-bundle\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.453367 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-scripts\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.455384 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/57bd9f7e-2311-4121-a33f-4610aecf4422-internal-tls-certs\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.455652 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.455710 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/57bd9f7e-2311-4121-a33f-4610aecf4422-logs\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.471117 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tnxg\" (UniqueName: \"kubernetes.io/projected/57bd9f7e-2311-4121-a33f-4610aecf4422-kube-api-access-6tnxg\") pod \"placement-6f665b5db4-wpfmw\" (UID: \"57bd9f7e-2311-4121-a33f-4610aecf4422\") " pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.586298 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.791812 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.816461 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.850398 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-credential-keys\") pod \"6a63a316-0795-4795-8662-5c0b2de2597f\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.850474 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-fernet-keys\") pod \"6a63a316-0795-4795-8662-5c0b2de2597f\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.850598 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-combined-ca-bundle\") pod \"6a63a316-0795-4795-8662-5c0b2de2597f\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.850644 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-scripts\") pod \"6a63a316-0795-4795-8662-5c0b2de2597f\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.850703 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-config-data\") pod \"6a63a316-0795-4795-8662-5c0b2de2597f\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.850737 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74ls5\" (UniqueName: \"kubernetes.io/projected/6a63a316-0795-4795-8662-5c0b2de2597f-kube-api-access-74ls5\") pod \"6a63a316-0795-4795-8662-5c0b2de2597f\" (UID: \"6a63a316-0795-4795-8662-5c0b2de2597f\") " Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.879970 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a63a316-0795-4795-8662-5c0b2de2597f-kube-api-access-74ls5" (OuterVolumeSpecName: "kube-api-access-74ls5") pod "6a63a316-0795-4795-8662-5c0b2de2597f" (UID: "6a63a316-0795-4795-8662-5c0b2de2597f"). InnerVolumeSpecName "kube-api-access-74ls5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.880404 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6a63a316-0795-4795-8662-5c0b2de2597f" (UID: "6a63a316-0795-4795-8662-5c0b2de2597f"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.886690 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-scripts" (OuterVolumeSpecName: "scripts") pod "6a63a316-0795-4795-8662-5c0b2de2597f" (UID: "6a63a316-0795-4795-8662-5c0b2de2597f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.888668 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6a63a316-0795-4795-8662-5c0b2de2597f" (UID: "6a63a316-0795-4795-8662-5c0b2de2597f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.906662 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a63a316-0795-4795-8662-5c0b2de2597f" (UID: "6a63a316-0795-4795-8662-5c0b2de2597f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.925393 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-config-data" (OuterVolumeSpecName: "config-data") pod "6a63a316-0795-4795-8662-5c0b2de2597f" (UID: "6a63a316-0795-4795-8662-5c0b2de2597f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.952973 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.953019 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.953035 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.953048 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74ls5\" (UniqueName: \"kubernetes.io/projected/6a63a316-0795-4795-8662-5c0b2de2597f-kube-api-access-74ls5\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.953063 4806 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:29 crc kubenswrapper[4806]: I0126 08:11:29.953075 4806 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a63a316-0795-4795-8662-5c0b2de2597f-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.182870 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"4e0c22de-1431-4fcc-9ebd-1cc4791260c8","Type":"ContainerStarted","Data":"9f6bf4e0c7d26f3986d737b202fc9bd31912452aa2d23cc49e7c9eede2751cd7"} Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.184473 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20febbb2-a1ac-4a38-8a1d-594fa53b0b06","Type":"ContainerStarted","Data":"14fccf9ba4ff31f8b5e7130b007494f392074236eab4f20ec1115dce87888c70"} Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.193025 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-btkzm" event={"ID":"6a63a316-0795-4795-8662-5c0b2de2597f","Type":"ContainerDied","Data":"52f12ae359cad1668cf1dbb8d4d637e3962fd6a8b3ad18eb31a00da66e6eff14"} Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.193082 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52f12ae359cad1668cf1dbb8d4d637e3962fd6a8b3ad18eb31a00da66e6eff14" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.193159 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-btkzm" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.285668 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5c44c79675-nsqr2"] Jan 26 08:11:30 crc kubenswrapper[4806]: E0126 08:11:30.286018 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a63a316-0795-4795-8662-5c0b2de2597f" containerName="keystone-bootstrap" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.286034 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a63a316-0795-4795-8662-5c0b2de2597f" containerName="keystone-bootstrap" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.286199 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a63a316-0795-4795-8662-5c0b2de2597f" containerName="keystone-bootstrap" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.286823 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.293965 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.294176 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-wsvrh" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.294389 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.294500 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.296578 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.297175 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.304906 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c44c79675-nsqr2"] Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.361768 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-config-data\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.361814 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-fernet-keys\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.361860 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6wx2\" (UniqueName: \"kubernetes.io/projected/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-kube-api-access-p6wx2\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.361883 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-internal-tls-certs\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.361905 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-credential-keys\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.361937 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-combined-ca-bundle\") pod \"keystone-5c44c79675-nsqr2\" 
(UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.361989 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-public-tls-certs\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.362008 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-scripts\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.425675 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6f665b5db4-wpfmw"] Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.463357 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6wx2\" (UniqueName: \"kubernetes.io/projected/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-kube-api-access-p6wx2\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.463646 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-internal-tls-certs\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.463672 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-credential-keys\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.463701 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-combined-ca-bundle\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.463751 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-public-tls-certs\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.463770 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-scripts\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.463806 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-config-data\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.463827 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-fernet-keys\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.472713 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-fernet-keys\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.473201 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-combined-ca-bundle\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.474244 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-config-data\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.477076 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-credential-keys\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.477083 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-scripts\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.486977 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-internal-tls-certs\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.494338 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-public-tls-certs\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.504750 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6wx2\" (UniqueName: \"kubernetes.io/projected/6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453-kube-api-access-p6wx2\") pod \"keystone-5c44c79675-nsqr2\" (UID: \"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453\") " 
pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.632651 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.802015 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.884000 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-42c6r"] Jan 26 08:11:30 crc kubenswrapper[4806]: I0126 08:11:30.884409 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" podUID="2762738e-b53a-4f25-ae4e-fa5182994a78" containerName="dnsmasq-dns" containerID="cri-o://9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017" gracePeriod=10 Jan 26 08:11:31 crc kubenswrapper[4806]: I0126 08:11:31.237325 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e0c22de-1431-4fcc-9ebd-1cc4791260c8","Type":"ContainerStarted","Data":"2ad53b837c7da7b8af599460f6cfc99ca4cac64f501b36d44e0ac6f5929e7096"} Jan 26 08:11:31 crc kubenswrapper[4806]: I0126 08:11:31.245563 4806 generic.go:334] "Generic (PLEG): container finished" podID="4588263f-b01b-4a54-829f-1cef11d1dbd3" containerID="fe8f1dbf123a7ed8f81d7773dea0015a57089e658ae2a3760eead5826aeece01" exitCode=0 Jan 26 08:11:31 crc kubenswrapper[4806]: I0126 08:11:31.245636 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8dksv" event={"ID":"4588263f-b01b-4a54-829f-1cef11d1dbd3","Type":"ContainerDied","Data":"fe8f1dbf123a7ed8f81d7773dea0015a57089e658ae2a3760eead5826aeece01"} Jan 26 08:11:31 crc kubenswrapper[4806]: I0126 08:11:31.255365 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20febbb2-a1ac-4a38-8a1d-594fa53b0b06","Type":"ContainerStarted","Data":"8744719c3edd57487457204fcef0336b3149b40334d7045bf497f74d4a82cd60"} Jan 26 08:11:31 crc kubenswrapper[4806]: I0126 08:11:31.312578 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f665b5db4-wpfmw" event={"ID":"57bd9f7e-2311-4121-a33f-4610aecf4422","Type":"ContainerStarted","Data":"b34f0aac5705137870b0712adcfdd81e54e9c3ad5c47d0d57133cae7b6f2141a"} Jan 26 08:11:31 crc kubenswrapper[4806]: I0126 08:11:31.494020 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5c44c79675-nsqr2"] Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.302985 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.351736 4806 generic.go:334] "Generic (PLEG): container finished" podID="2762738e-b53a-4f25-ae4e-fa5182994a78" containerID="9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017" exitCode=0 Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.351795 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" event={"ID":"2762738e-b53a-4f25-ae4e-fa5182994a78","Type":"ContainerDied","Data":"9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017"} Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.351823 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" event={"ID":"2762738e-b53a-4f25-ae4e-fa5182994a78","Type":"ContainerDied","Data":"a7d938e28470073b1c56ced327081624d82b3bb0ed1a48ee35ba203adeea5a4d"} Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.351841 4806 scope.go:117] "RemoveContainer" containerID="9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.351954 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-42c6r" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.356490 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f665b5db4-wpfmw" event={"ID":"57bd9f7e-2311-4121-a33f-4610aecf4422","Type":"ContainerStarted","Data":"acee30ad2f82278ae1fbbb9201e1e37707b8286ff51882d72a721656a8c36545"} Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.356541 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6f665b5db4-wpfmw" event={"ID":"57bd9f7e-2311-4121-a33f-4610aecf4422","Type":"ContainerStarted","Data":"0ee1d32e382be2e85fa724b5a4876347692e01059907c22e49733ec853c3a98c"} Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.357188 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.357209 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.366433 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e0c22de-1431-4fcc-9ebd-1cc4791260c8","Type":"ContainerStarted","Data":"9c0390bc804d1036e606767eb250807db0d925677c52f562f9fee863672b8282"} Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.373707 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c44c79675-nsqr2" event={"ID":"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453","Type":"ContainerStarted","Data":"0a05fd2fbbd8126ebec746657d52538130b8a2c9ac631964ea0c74660fd193e6"} Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.373770 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5c44c79675-nsqr2" event={"ID":"6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453","Type":"ContainerStarted","Data":"903f85264065edafb66ee0ffa6e338be210195c9148f68fc7e47b4eb572f2115"} Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.373884 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.412157 4806 scope.go:117] "RemoveContainer" 
containerID="1c564710fbb3c31334906cf48cfbbc12782e81af23377856a47e6e1105bff444" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.420465 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6f665b5db4-wpfmw" podStartSLOduration=3.420443868 podStartE2EDuration="3.420443868s" podCreationTimestamp="2026-01-26 08:11:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:32.406854736 +0000 UTC m=+1071.671262792" watchObservedRunningTime="2026-01-26 08:11:32.420443868 +0000 UTC m=+1071.684851914" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.458330 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.458311342 podStartE2EDuration="4.458311342s" podCreationTimestamp="2026-01-26 08:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:32.436791227 +0000 UTC m=+1071.701199273" watchObservedRunningTime="2026-01-26 08:11:32.458311342 +0000 UTC m=+1071.722719398" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.470415 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5c44c79675-nsqr2" podStartSLOduration=2.470391261 podStartE2EDuration="2.470391261s" podCreationTimestamp="2026-01-26 08:11:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:32.452754446 +0000 UTC m=+1071.717162502" watchObservedRunningTime="2026-01-26 08:11:32.470391261 +0000 UTC m=+1071.734799317" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.471635 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-dns-svc\") pod \"2762738e-b53a-4f25-ae4e-fa5182994a78\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.471681 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-config\") pod \"2762738e-b53a-4f25-ae4e-fa5182994a78\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.471732 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-dns-swift-storage-0\") pod \"2762738e-b53a-4f25-ae4e-fa5182994a78\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.472062 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgddk\" (UniqueName: \"kubernetes.io/projected/2762738e-b53a-4f25-ae4e-fa5182994a78-kube-api-access-rgddk\") pod \"2762738e-b53a-4f25-ae4e-fa5182994a78\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.472141 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-ovsdbserver-sb\") pod \"2762738e-b53a-4f25-ae4e-fa5182994a78\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " Jan 
26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.472187 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-ovsdbserver-nb\") pod \"2762738e-b53a-4f25-ae4e-fa5182994a78\" (UID: \"2762738e-b53a-4f25-ae4e-fa5182994a78\") " Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.509633 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2762738e-b53a-4f25-ae4e-fa5182994a78-kube-api-access-rgddk" (OuterVolumeSpecName: "kube-api-access-rgddk") pod "2762738e-b53a-4f25-ae4e-fa5182994a78" (UID: "2762738e-b53a-4f25-ae4e-fa5182994a78"). InnerVolumeSpecName "kube-api-access-rgddk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.534216 4806 scope.go:117] "RemoveContainer" containerID="9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017" Jan 26 08:11:32 crc kubenswrapper[4806]: E0126 08:11:32.534562 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017\": container with ID starting with 9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017 not found: ID does not exist" containerID="9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.534587 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017"} err="failed to get container status \"9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017\": rpc error: code = NotFound desc = could not find container \"9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017\": container with ID starting with 9adff4692a6d52a6838b189713eb2f631570866b5371a0c6407a2857a0019017 not found: ID does not exist" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.534607 4806 scope.go:117] "RemoveContainer" containerID="1c564710fbb3c31334906cf48cfbbc12782e81af23377856a47e6e1105bff444" Jan 26 08:11:32 crc kubenswrapper[4806]: E0126 08:11:32.534826 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c564710fbb3c31334906cf48cfbbc12782e81af23377856a47e6e1105bff444\": container with ID starting with 1c564710fbb3c31334906cf48cfbbc12782e81af23377856a47e6e1105bff444 not found: ID does not exist" containerID="1c564710fbb3c31334906cf48cfbbc12782e81af23377856a47e6e1105bff444" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.534844 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c564710fbb3c31334906cf48cfbbc12782e81af23377856a47e6e1105bff444"} err="failed to get container status \"1c564710fbb3c31334906cf48cfbbc12782e81af23377856a47e6e1105bff444\": rpc error: code = NotFound desc = could not find container \"1c564710fbb3c31334906cf48cfbbc12782e81af23377856a47e6e1105bff444\": container with ID starting with 1c564710fbb3c31334906cf48cfbbc12782e81af23377856a47e6e1105bff444 not found: ID does not exist" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.575296 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgddk\" (UniqueName: \"kubernetes.io/projected/2762738e-b53a-4f25-ae4e-fa5182994a78-kube-api-access-rgddk\") on node \"crc\" 
DevicePath \"\"" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.791250 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2762738e-b53a-4f25-ae4e-fa5182994a78" (UID: "2762738e-b53a-4f25-ae4e-fa5182994a78"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.884138 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.893047 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2762738e-b53a-4f25-ae4e-fa5182994a78" (UID: "2762738e-b53a-4f25-ae4e-fa5182994a78"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.894349 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-config" (OuterVolumeSpecName: "config") pod "2762738e-b53a-4f25-ae4e-fa5182994a78" (UID: "2762738e-b53a-4f25-ae4e-fa5182994a78"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.906228 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2762738e-b53a-4f25-ae4e-fa5182994a78" (UID: "2762738e-b53a-4f25-ae4e-fa5182994a78"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.919955 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2762738e-b53a-4f25-ae4e-fa5182994a78" (UID: "2762738e-b53a-4f25-ae4e-fa5182994a78"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.985183 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.985212 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.985222 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:32 crc kubenswrapper[4806]: I0126 08:11:32.985230 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2762738e-b53a-4f25-ae4e-fa5182994a78-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.077747 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8dksv" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.084935 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-42c6r"] Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.085580 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-42c6r"] Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.189004 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4588263f-b01b-4a54-829f-1cef11d1dbd3-combined-ca-bundle\") pod \"4588263f-b01b-4a54-829f-1cef11d1dbd3\" (UID: \"4588263f-b01b-4a54-829f-1cef11d1dbd3\") " Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.189094 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4588263f-b01b-4a54-829f-1cef11d1dbd3-db-sync-config-data\") pod \"4588263f-b01b-4a54-829f-1cef11d1dbd3\" (UID: \"4588263f-b01b-4a54-829f-1cef11d1dbd3\") " Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.189167 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rthdl\" (UniqueName: \"kubernetes.io/projected/4588263f-b01b-4a54-829f-1cef11d1dbd3-kube-api-access-rthdl\") pod \"4588263f-b01b-4a54-829f-1cef11d1dbd3\" (UID: \"4588263f-b01b-4a54-829f-1cef11d1dbd3\") " Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.201039 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4588263f-b01b-4a54-829f-1cef11d1dbd3-kube-api-access-rthdl" (OuterVolumeSpecName: "kube-api-access-rthdl") pod "4588263f-b01b-4a54-829f-1cef11d1dbd3" (UID: "4588263f-b01b-4a54-829f-1cef11d1dbd3"). InnerVolumeSpecName "kube-api-access-rthdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.224948 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4588263f-b01b-4a54-829f-1cef11d1dbd3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4588263f-b01b-4a54-829f-1cef11d1dbd3" (UID: "4588263f-b01b-4a54-829f-1cef11d1dbd3"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.280754 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4588263f-b01b-4a54-829f-1cef11d1dbd3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4588263f-b01b-4a54-829f-1cef11d1dbd3" (UID: "4588263f-b01b-4a54-829f-1cef11d1dbd3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.302734 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4588263f-b01b-4a54-829f-1cef11d1dbd3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.302956 4806 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4588263f-b01b-4a54-829f-1cef11d1dbd3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.303018 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rthdl\" (UniqueName: \"kubernetes.io/projected/4588263f-b01b-4a54-829f-1cef11d1dbd3-kube-api-access-rthdl\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.471314 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7b56fd48c9-2fhh8"] Jan 26 08:11:33 crc kubenswrapper[4806]: E0126 08:11:33.471963 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2762738e-b53a-4f25-ae4e-fa5182994a78" containerName="dnsmasq-dns" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.471976 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2762738e-b53a-4f25-ae4e-fa5182994a78" containerName="dnsmasq-dns" Jan 26 08:11:33 crc kubenswrapper[4806]: E0126 08:11:33.471998 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4588263f-b01b-4a54-829f-1cef11d1dbd3" containerName="barbican-db-sync" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.472005 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4588263f-b01b-4a54-829f-1cef11d1dbd3" containerName="barbican-db-sync" Jan 26 08:11:33 crc kubenswrapper[4806]: E0126 08:11:33.472015 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2762738e-b53a-4f25-ae4e-fa5182994a78" containerName="init" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.472025 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2762738e-b53a-4f25-ae4e-fa5182994a78" containerName="init" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.472351 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="4588263f-b01b-4a54-829f-1cef11d1dbd3" containerName="barbican-db-sync" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.472396 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2762738e-b53a-4f25-ae4e-fa5182994a78" containerName="dnsmasq-dns" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.488015 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.499800 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.537656 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bjtkx" event={"ID":"19528149-09a1-44a5-b419-bbe91789d493","Type":"ContainerStarted","Data":"a9564fe8c4b2397cae0a3995b2fda49cd35376fd0adf3daf75579d44162e21a2"} Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.611558 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7b56fd48c9-2fhh8"] Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.612446 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-8dksv" event={"ID":"4588263f-b01b-4a54-829f-1cef11d1dbd3","Type":"ContainerDied","Data":"018297ed617cbeeea51a1718d31dc809e8c02f42977f4e15b58124578dcb7d4b"} Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.612478 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="018297ed617cbeeea51a1718d31dc809e8c02f42977f4e15b58124578dcb7d4b" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.612614 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-8dksv" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.645442 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7c56b98c68-pf9q4"] Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.648674 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.652903 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c07d6e0b-e41e-402f-8d38-196e641be864-config-data-custom\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.652989 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c07d6e0b-e41e-402f-8d38-196e641be864-combined-ca-bundle\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.653099 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp899\" (UniqueName: \"kubernetes.io/projected/c07d6e0b-e41e-402f-8d38-196e641be864-kube-api-access-pp899\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.653132 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c07d6e0b-e41e-402f-8d38-196e641be864-logs\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.653151 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c07d6e0b-e41e-402f-8d38-196e641be864-config-data\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.660119 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20febbb2-a1ac-4a38-8a1d-594fa53b0b06","Type":"ContainerStarted","Data":"cc152a9a8b4a3addd40f4a2697481ff2981fe983753c7a5d570dc0e5efb5eb2d"} Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.662944 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c56b98c68-pf9q4"] Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.673870 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.693227 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-bjtkx" podStartSLOduration=3.397621159 podStartE2EDuration="49.693207843s" podCreationTimestamp="2026-01-26 08:10:44 +0000 UTC" firstStartedPulling="2026-01-26 08:10:45.730137775 +0000 UTC m=+1024.994545831" lastFinishedPulling="2026-01-26 08:11:32.025724459 +0000 UTC m=+1071.290132515" observedRunningTime="2026-01-26 08:11:33.642250712 +0000 UTC m=+1072.906658768" watchObservedRunningTime="2026-01-26 08:11:33.693207843 +0000 UTC m=+1072.957615899" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.737745 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d485d788d-5q4tb" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.754414 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c07d6e0b-e41e-402f-8d38-196e641be864-logs\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.754482 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c07d6e0b-e41e-402f-8d38-196e641be864-config-data\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.754571 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c07d6e0b-e41e-402f-8d38-196e641be864-config-data-custom\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.754613 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c07d6e0b-e41e-402f-8d38-196e641be864-combined-ca-bundle\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " 
pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.754764 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp899\" (UniqueName: \"kubernetes.io/projected/c07d6e0b-e41e-402f-8d38-196e641be864-kube-api-access-pp899\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.772790 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c07d6e0b-e41e-402f-8d38-196e641be864-logs\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.787041 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c07d6e0b-e41e-402f-8d38-196e641be864-combined-ca-bundle\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.809314 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c07d6e0b-e41e-402f-8d38-196e641be864-config-data\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.831047 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c07d6e0b-e41e-402f-8d38-196e641be864-config-data-custom\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.863884 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8773f7-27d5-469b-837e-90bf31716266-config-data\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.863964 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n22v\" (UniqueName: \"kubernetes.io/projected/3f8773f7-27d5-469b-837e-90bf31716266-kube-api-access-7n22v\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.863988 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8773f7-27d5-469b-837e-90bf31716266-combined-ca-bundle\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.864115 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/3f8773f7-27d5-469b-837e-90bf31716266-config-data-custom\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.865020 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f8773f7-27d5-469b-837e-90bf31716266-logs\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.892489 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.89247104 podStartE2EDuration="5.89247104s" podCreationTimestamp="2026-01-26 08:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:33.870151583 +0000 UTC m=+1073.134559639" watchObservedRunningTime="2026-01-26 08:11:33.89247104 +0000 UTC m=+1073.156879096" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.892804 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xjh66"] Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.894419 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.956197 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp899\" (UniqueName: \"kubernetes.io/projected/c07d6e0b-e41e-402f-8d38-196e641be864-kube-api-access-pp899\") pod \"barbican-worker-7b56fd48c9-2fhh8\" (UID: \"c07d6e0b-e41e-402f-8d38-196e641be864\") " pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.971429 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n22v\" (UniqueName: \"kubernetes.io/projected/3f8773f7-27d5-469b-837e-90bf31716266-kube-api-access-7n22v\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.971481 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f8773f7-27d5-469b-837e-90bf31716266-combined-ca-bundle\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.971505 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24nsc\" (UniqueName: \"kubernetes.io/projected/101d2806-ced2-4267-86ec-114320756e46-kube-api-access-24nsc\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.971554 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-ovsdbserver-sb\") pod 
\"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.971602 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.971632 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.971647 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.971679 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f8773f7-27d5-469b-837e-90bf31716266-config-data-custom\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.971706 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f8773f7-27d5-469b-837e-90bf31716266-logs\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.971752 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8773f7-27d5-469b-837e-90bf31716266-config-data\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.971788 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-config\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.973949 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f8773f7-27d5-469b-837e-90bf31716266-logs\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.976896 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3f8773f7-27d5-469b-837e-90bf31716266-combined-ca-bundle\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.977487 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f8773f7-27d5-469b-837e-90bf31716266-config-data-custom\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.980482 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f8773f7-27d5-469b-837e-90bf31716266-config-data\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.988868 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xjh66"] Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.995195 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:11:33 crc kubenswrapper[4806]: I0126 08:11:33.995654 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.030615 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n22v\" (UniqueName: \"kubernetes.io/projected/3f8773f7-27d5-469b-837e-90bf31716266-kube-api-access-7n22v\") pod \"barbican-keystone-listener-7c56b98c68-pf9q4\" (UID: \"3f8773f7-27d5-469b-837e-90bf31716266\") " pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.041886 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.083598 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-config\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.083651 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24nsc\" (UniqueName: \"kubernetes.io/projected/101d2806-ced2-4267-86ec-114320756e46-kube-api-access-24nsc\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.083689 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.083730 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.083770 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.083787 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.084631 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.085940 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.086115 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" 
Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.086495 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-config\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.087286 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.124735 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24nsc\" (UniqueName: \"kubernetes.io/projected/101d2806-ced2-4267-86ec-114320756e46-kube-api-access-24nsc\") pod \"dnsmasq-dns-848cf88cfc-xjh66\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.128637 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7b56fd48c9-2fhh8" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.218840 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76c565f4b6-mqhr5"] Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.221250 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.226078 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.234476 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.284669 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76c565f4b6-mqhr5"] Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.388077 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-config-data\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.388166 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-combined-ca-bundle\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.388196 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-config-data-custom\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.388223 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xlr7\" (UniqueName: \"kubernetes.io/projected/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-kube-api-access-6xlr7\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.388319 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-logs\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.490371 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-combined-ca-bundle\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.490418 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-config-data-custom\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.490461 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xlr7\" (UniqueName: \"kubernetes.io/projected/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-kube-api-access-6xlr7\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.490716 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-logs\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.490806 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-config-data\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.491435 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-logs\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.501124 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-config-data-custom\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.501143 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-combined-ca-bundle\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.502113 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-config-data\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.510137 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xlr7\" (UniqueName: \"kubernetes.io/projected/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-kube-api-access-6xlr7\") pod \"barbican-api-76c565f4b6-mqhr5\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.549762 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.665554 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7c56b98c68-pf9q4"] Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.679628 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qw29c" event={"ID":"bc6102bf-7483-4063-af9d-841e78398b0c","Type":"ContainerStarted","Data":"2fe5ae91a9473734ce41faf4efb4de45a5d442716ca0e10fd78e7008169ce5c0"} Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.739465 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qw29c" podStartSLOduration=4.422043756 podStartE2EDuration="50.739444943s" podCreationTimestamp="2026-01-26 08:10:44 +0000 UTC" firstStartedPulling="2026-01-26 08:10:46.413201734 +0000 UTC m=+1025.677609790" lastFinishedPulling="2026-01-26 08:11:32.730602921 +0000 UTC m=+1071.995010977" observedRunningTime="2026-01-26 08:11:34.713894356 +0000 UTC m=+1073.978302412" watchObservedRunningTime="2026-01-26 08:11:34.739444943 +0000 UTC m=+1074.003852989" Jan 26 08:11:34 crc kubenswrapper[4806]: I0126 08:11:34.833849 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xjh66"] Jan 26 08:11:35 crc kubenswrapper[4806]: I0126 08:11:35.114935 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2762738e-b53a-4f25-ae4e-fa5182994a78" path="/var/lib/kubelet/pods/2762738e-b53a-4f25-ae4e-fa5182994a78/volumes" Jan 26 08:11:35 crc kubenswrapper[4806]: I0126 08:11:35.115541 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7b56fd48c9-2fhh8"] Jan 26 08:11:35 crc kubenswrapper[4806]: W0126 08:11:35.120207 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc07d6e0b_e41e_402f_8d38_196e641be864.slice/crio-be31b3ccdcb92a7f95af48d6330f662c7800ba1d5f5479199ac18177afb95f1d WatchSource:0}: Error finding container be31b3ccdcb92a7f95af48d6330f662c7800ba1d5f5479199ac18177afb95f1d: Status 404 returned error can't find the container with id be31b3ccdcb92a7f95af48d6330f662c7800ba1d5f5479199ac18177afb95f1d Jan 26 08:11:35 crc kubenswrapper[4806]: I0126 08:11:35.320642 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76c565f4b6-mqhr5"] Jan 26 08:11:35 crc kubenswrapper[4806]: I0126 08:11:35.743346 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76c565f4b6-mqhr5" event={"ID":"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3","Type":"ContainerStarted","Data":"b5ad4005e4b42c1a424a41154505cbf8b8b75084f4de3935dfea9dc9fd65521c"} Jan 26 08:11:35 crc kubenswrapper[4806]: I0126 08:11:35.743658 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76c565f4b6-mqhr5" event={"ID":"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3","Type":"ContainerStarted","Data":"02368f9990a62f811e5345abc44c34d242c3db1a62c54fb7d641474f562b48a3"} Jan 26 08:11:35 crc kubenswrapper[4806]: I0126 08:11:35.753805 4806 generic.go:334] "Generic (PLEG): container finished" podID="101d2806-ced2-4267-86ec-114320756e46" containerID="9f5a7b12ade198be05a17d5e64123755ee3e65667b31aa1124c321ae77428c01" exitCode=0 Jan 26 08:11:35 crc kubenswrapper[4806]: I0126 08:11:35.753850 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" 
event={"ID":"101d2806-ced2-4267-86ec-114320756e46","Type":"ContainerDied","Data":"9f5a7b12ade198be05a17d5e64123755ee3e65667b31aa1124c321ae77428c01"} Jan 26 08:11:35 crc kubenswrapper[4806]: I0126 08:11:35.753867 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" event={"ID":"101d2806-ced2-4267-86ec-114320756e46","Type":"ContainerStarted","Data":"45798520d3189bd7554182f54be640afcc9d6933ce2566f26e17b0073778a060"} Jan 26 08:11:35 crc kubenswrapper[4806]: I0126 08:11:35.757351 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b56fd48c9-2fhh8" event={"ID":"c07d6e0b-e41e-402f-8d38-196e641be864","Type":"ContainerStarted","Data":"be31b3ccdcb92a7f95af48d6330f662c7800ba1d5f5479199ac18177afb95f1d"} Jan 26 08:11:35 crc kubenswrapper[4806]: I0126 08:11:35.759573 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" event={"ID":"3f8773f7-27d5-469b-837e-90bf31716266","Type":"ContainerStarted","Data":"de30cef5a3e15cd59bc7406ec65febacd4bef1a1784264aad3bef62129ec1bf1"} Jan 26 08:11:36 crc kubenswrapper[4806]: I0126 08:11:36.770812 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" event={"ID":"101d2806-ced2-4267-86ec-114320756e46","Type":"ContainerStarted","Data":"db922cc69b21749920c7875d5009173ba5f1c416b99d25f1100564119ef59752"} Jan 26 08:11:36 crc kubenswrapper[4806]: I0126 08:11:36.771109 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:36 crc kubenswrapper[4806]: I0126 08:11:36.783242 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76c565f4b6-mqhr5" event={"ID":"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3","Type":"ContainerStarted","Data":"a587379076ed125fce902c6e4b41e28a15352be0b2f908952fe53d31983910c4"} Jan 26 08:11:36 crc kubenswrapper[4806]: I0126 08:11:36.783821 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:36 crc kubenswrapper[4806]: I0126 08:11:36.783849 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:36 crc kubenswrapper[4806]: I0126 08:11:36.791975 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" podStartSLOduration=3.791958494 podStartE2EDuration="3.791958494s" podCreationTimestamp="2026-01-26 08:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:36.790928135 +0000 UTC m=+1076.055336191" watchObservedRunningTime="2026-01-26 08:11:36.791958494 +0000 UTC m=+1076.056366550" Jan 26 08:11:36 crc kubenswrapper[4806]: I0126 08:11:36.819800 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76c565f4b6-mqhr5" podStartSLOduration=2.819782535 podStartE2EDuration="2.819782535s" podCreationTimestamp="2026-01-26 08:11:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:36.806021089 +0000 UTC m=+1076.070429135" watchObservedRunningTime="2026-01-26 08:11:36.819782535 +0000 UTC m=+1076.084190591" Jan 26 08:11:37 crc kubenswrapper[4806]: I0126 08:11:37.874245 4806 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/barbican-api-5554df79f4-4pvrc"] Jan 26 08:11:37 crc kubenswrapper[4806]: I0126 08:11:37.876930 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:37 crc kubenswrapper[4806]: I0126 08:11:37.881975 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 08:11:37 crc kubenswrapper[4806]: I0126 08:11:37.882198 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 08:11:37 crc kubenswrapper[4806]: I0126 08:11:37.922329 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5554df79f4-4pvrc"] Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.007466 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh59x\" (UniqueName: \"kubernetes.io/projected/758b7482-35c7-4cda-aaff-f3e3784bc5c4-kube-api-access-hh59x\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.007552 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-internal-tls-certs\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.007627 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-config-data\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.007650 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-config-data-custom\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.007798 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-public-tls-certs\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.007829 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-combined-ca-bundle\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.007880 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/758b7482-35c7-4cda-aaff-f3e3784bc5c4-logs\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: 
\"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.109378 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-public-tls-certs\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.109420 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-combined-ca-bundle\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.109445 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/758b7482-35c7-4cda-aaff-f3e3784bc5c4-logs\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.109491 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh59x\" (UniqueName: \"kubernetes.io/projected/758b7482-35c7-4cda-aaff-f3e3784bc5c4-kube-api-access-hh59x\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.109515 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-internal-tls-certs\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.109606 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-config-data\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.109622 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-config-data-custom\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.110973 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/758b7482-35c7-4cda-aaff-f3e3784bc5c4-logs\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.118479 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-combined-ca-bundle\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " 
pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.118713 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-config-data-custom\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.119352 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-public-tls-certs\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.122882 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-config-data\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.198573 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/758b7482-35c7-4cda-aaff-f3e3784bc5c4-internal-tls-certs\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.202775 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh59x\" (UniqueName: \"kubernetes.io/projected/758b7482-35c7-4cda-aaff-f3e3784bc5c4-kube-api-access-hh59x\") pod \"barbican-api-5554df79f4-4pvrc\" (UID: \"758b7482-35c7-4cda-aaff-f3e3784bc5c4\") " pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.226558 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.835061 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.836163 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.921266 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.932009 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.937000 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.937038 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:38 crc kubenswrapper[4806]: I0126 08:11:38.997748 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:39 crc kubenswrapper[4806]: I0126 08:11:39.038355 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:39 crc kubenswrapper[4806]: I0126 08:11:39.866684 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 08:11:39 crc kubenswrapper[4806]: I0126 08:11:39.867102 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 08:11:39 crc kubenswrapper[4806]: I0126 08:11:39.867113 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:39 crc kubenswrapper[4806]: I0126 08:11:39.867125 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:40 crc kubenswrapper[4806]: I0126 08:11:40.874930 4806 generic.go:334] "Generic (PLEG): container finished" podID="19528149-09a1-44a5-b419-bbe91789d493" containerID="a9564fe8c4b2397cae0a3995b2fda49cd35376fd0adf3daf75579d44162e21a2" exitCode=0 Jan 26 08:11:40 crc kubenswrapper[4806]: I0126 08:11:40.875016 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bjtkx" event={"ID":"19528149-09a1-44a5-b419-bbe91789d493","Type":"ContainerDied","Data":"a9564fe8c4b2397cae0a3995b2fda49cd35376fd0adf3daf75579d44162e21a2"} Jan 26 08:11:41 crc kubenswrapper[4806]: I0126 08:11:41.844928 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5554df79f4-4pvrc"] Jan 26 08:11:41 crc kubenswrapper[4806]: I0126 08:11:41.912839 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" event={"ID":"3f8773f7-27d5-469b-837e-90bf31716266","Type":"ContainerStarted","Data":"2ea71708eb27fb24edef69b2c6e39c68a631854dc65c4f5a1c510f973556b1d8"} Jan 26 08:11:41 crc kubenswrapper[4806]: I0126 08:11:41.915396 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5554df79f4-4pvrc" 
event={"ID":"758b7482-35c7-4cda-aaff-f3e3784bc5c4","Type":"ContainerStarted","Data":"4b93d9e50d6f96212d2d6c31e1dab740d004bd20aa7418a1847bd1923ef6b63b"} Jan 26 08:11:41 crc kubenswrapper[4806]: I0126 08:11:41.917405 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93bf46a8-2942-4b36-9853-88ff5c6e756b","Type":"ContainerStarted","Data":"8301cf9275eda41f8581b82288261e3c0d7d73701303206c3c8836982952549f"} Jan 26 08:11:41 crc kubenswrapper[4806]: I0126 08:11:41.918793 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 08:11:41 crc kubenswrapper[4806]: I0126 08:11:41.918809 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 08:11:41 crc kubenswrapper[4806]: I0126 08:11:41.919500 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b56fd48c9-2fhh8" event={"ID":"c07d6e0b-e41e-402f-8d38-196e641be864","Type":"ContainerStarted","Data":"93af00d78ce27930279eb6c61e96d3abb270f45234bc22eb89e3e4dbd7b84cad"} Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.424119 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bjtkx" Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.515832 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19528149-09a1-44a5-b419-bbe91789d493-combined-ca-bundle\") pod \"19528149-09a1-44a5-b419-bbe91789d493\" (UID: \"19528149-09a1-44a5-b419-bbe91789d493\") " Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.516206 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvctp\" (UniqueName: \"kubernetes.io/projected/19528149-09a1-44a5-b419-bbe91789d493-kube-api-access-kvctp\") pod \"19528149-09a1-44a5-b419-bbe91789d493\" (UID: \"19528149-09a1-44a5-b419-bbe91789d493\") " Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.516305 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19528149-09a1-44a5-b419-bbe91789d493-config-data\") pod \"19528149-09a1-44a5-b419-bbe91789d493\" (UID: \"19528149-09a1-44a5-b419-bbe91789d493\") " Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.536898 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19528149-09a1-44a5-b419-bbe91789d493-kube-api-access-kvctp" (OuterVolumeSpecName: "kube-api-access-kvctp") pod "19528149-09a1-44a5-b419-bbe91789d493" (UID: "19528149-09a1-44a5-b419-bbe91789d493"). InnerVolumeSpecName "kube-api-access-kvctp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.592635 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19528149-09a1-44a5-b419-bbe91789d493-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "19528149-09a1-44a5-b419-bbe91789d493" (UID: "19528149-09a1-44a5-b419-bbe91789d493"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.623091 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19528149-09a1-44a5-b419-bbe91789d493-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.623120 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvctp\" (UniqueName: \"kubernetes.io/projected/19528149-09a1-44a5-b419-bbe91789d493-kube-api-access-kvctp\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.634391 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19528149-09a1-44a5-b419-bbe91789d493-config-data" (OuterVolumeSpecName: "config-data") pod "19528149-09a1-44a5-b419-bbe91789d493" (UID: "19528149-09a1-44a5-b419-bbe91789d493"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.726646 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19528149-09a1-44a5-b419-bbe91789d493-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.933080 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-bjtkx" Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.933096 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-bjtkx" event={"ID":"19528149-09a1-44a5-b419-bbe91789d493","Type":"ContainerDied","Data":"fd5e70421906e9710119a2f5550ab75b467b5cac848c81709f2f2e7f6bb2530d"} Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.933138 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd5e70421906e9710119a2f5550ab75b467b5cac848c81709f2f2e7f6bb2530d" Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.948040 4806 generic.go:334] "Generic (PLEG): container finished" podID="bc6102bf-7483-4063-af9d-841e78398b0c" containerID="2fe5ae91a9473734ce41faf4efb4de45a5d442716ca0e10fd78e7008169ce5c0" exitCode=0 Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.948131 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qw29c" event={"ID":"bc6102bf-7483-4063-af9d-841e78398b0c","Type":"ContainerDied","Data":"2fe5ae91a9473734ce41faf4efb4de45a5d442716ca0e10fd78e7008169ce5c0"} Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.959818 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7b56fd48c9-2fhh8" event={"ID":"c07d6e0b-e41e-402f-8d38-196e641be864","Type":"ContainerStarted","Data":"eedb882668111dd15f74a7b22f46298e4c145136077fa90288b9b1324f310d51"} Jan 26 08:11:42 crc kubenswrapper[4806]: I0126 08:11:42.983038 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" event={"ID":"3f8773f7-27d5-469b-837e-90bf31716266","Type":"ContainerStarted","Data":"ed2667e69b70a6965e954e00b47b93cf075856d18c97f8c5565dd7dd7ae60d4f"} Jan 26 08:11:43 crc kubenswrapper[4806]: I0126 08:11:43.000939 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5554df79f4-4pvrc" event={"ID":"758b7482-35c7-4cda-aaff-f3e3784bc5c4","Type":"ContainerStarted","Data":"5e018f9dd4d67b94669f9876d511c1d2738076d40fea5fdde7b544070214cfdf"} Jan 26 08:11:43 crc 
kubenswrapper[4806]: I0126 08:11:43.031046 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7b56fd48c9-2fhh8" podStartSLOduration=3.921382929 podStartE2EDuration="10.031027683s" podCreationTimestamp="2026-01-26 08:11:33 +0000 UTC" firstStartedPulling="2026-01-26 08:11:35.138661718 +0000 UTC m=+1074.403069774" lastFinishedPulling="2026-01-26 08:11:41.248306472 +0000 UTC m=+1080.512714528" observedRunningTime="2026-01-26 08:11:43.012966305 +0000 UTC m=+1082.277374351" watchObservedRunningTime="2026-01-26 08:11:43.031027683 +0000 UTC m=+1082.295435739" Jan 26 08:11:43 crc kubenswrapper[4806]: I0126 08:11:43.045503 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7c56b98c68-pf9q4" podStartSLOduration=3.493562621 podStartE2EDuration="10.045487329s" podCreationTimestamp="2026-01-26 08:11:33 +0000 UTC" firstStartedPulling="2026-01-26 08:11:34.69448383 +0000 UTC m=+1073.958891886" lastFinishedPulling="2026-01-26 08:11:41.246408538 +0000 UTC m=+1080.510816594" observedRunningTime="2026-01-26 08:11:43.043466302 +0000 UTC m=+1082.307874358" watchObservedRunningTime="2026-01-26 08:11:43.045487329 +0000 UTC m=+1082.309895385" Jan 26 08:11:43 crc kubenswrapper[4806]: I0126 08:11:43.728193 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d485d788d-5q4tb" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 26 08:11:43 crc kubenswrapper[4806]: I0126 08:11:43.995406 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b8f96b47b-sbsnb" podUID="d4ed3e96-22ec-410e-8f50-afd310343aa8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.012301 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5554df79f4-4pvrc" event={"ID":"758b7482-35c7-4cda-aaff-f3e3784bc5c4","Type":"ContainerStarted","Data":"ead68df5302ea15bb4e91fb231294e6762828e0ec413855959d9cb53e5bd3c3d"} Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.012792 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.012944 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.240689 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.267981 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5554df79f4-4pvrc" podStartSLOduration=7.267962141 podStartE2EDuration="7.267962141s" podCreationTimestamp="2026-01-26 08:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:44.04259752 +0000 UTC m=+1083.307005576" watchObservedRunningTime="2026-01-26 08:11:44.267962141 +0000 UTC m=+1083.532370197" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.304964 4806 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d6hkg"] Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.305213 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" podUID="6f121c86-c5ee-47c3-b80b-f8791a68ee15" containerName="dnsmasq-dns" containerID="cri-o://31f15ffccfba1e8be909c21c0618c26eb1e7bfa209ca17a5c1161b715201b10c" gracePeriod=10 Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.582417 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qw29c" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.661767 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-db-sync-config-data\") pod \"bc6102bf-7483-4063-af9d-841e78398b0c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.662796 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngzhn\" (UniqueName: \"kubernetes.io/projected/bc6102bf-7483-4063-af9d-841e78398b0c-kube-api-access-ngzhn\") pod \"bc6102bf-7483-4063-af9d-841e78398b0c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.662828 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc6102bf-7483-4063-af9d-841e78398b0c-etc-machine-id\") pod \"bc6102bf-7483-4063-af9d-841e78398b0c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.662852 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-config-data\") pod \"bc6102bf-7483-4063-af9d-841e78398b0c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.662876 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-scripts\") pod \"bc6102bf-7483-4063-af9d-841e78398b0c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.662944 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-combined-ca-bundle\") pod \"bc6102bf-7483-4063-af9d-841e78398b0c\" (UID: \"bc6102bf-7483-4063-af9d-841e78398b0c\") " Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.663415 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc6102bf-7483-4063-af9d-841e78398b0c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bc6102bf-7483-4063-af9d-841e78398b0c" (UID: "bc6102bf-7483-4063-af9d-841e78398b0c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.671171 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-scripts" (OuterVolumeSpecName: "scripts") pod "bc6102bf-7483-4063-af9d-841e78398b0c" (UID: "bc6102bf-7483-4063-af9d-841e78398b0c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.677190 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc6102bf-7483-4063-af9d-841e78398b0c-kube-api-access-ngzhn" (OuterVolumeSpecName: "kube-api-access-ngzhn") pod "bc6102bf-7483-4063-af9d-841e78398b0c" (UID: "bc6102bf-7483-4063-af9d-841e78398b0c"). InnerVolumeSpecName "kube-api-access-ngzhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.678770 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bc6102bf-7483-4063-af9d-841e78398b0c" (UID: "bc6102bf-7483-4063-af9d-841e78398b0c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.769675 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngzhn\" (UniqueName: \"kubernetes.io/projected/bc6102bf-7483-4063-af9d-841e78398b0c-kube-api-access-ngzhn\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.769996 4806 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc6102bf-7483-4063-af9d-841e78398b0c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.770004 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.770013 4806 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.781413 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc6102bf-7483-4063-af9d-841e78398b0c" (UID: "bc6102bf-7483-4063-af9d-841e78398b0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.858842 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-config-data" (OuterVolumeSpecName: "config-data") pod "bc6102bf-7483-4063-af9d-841e78398b0c" (UID: "bc6102bf-7483-4063-af9d-841e78398b0c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.876574 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.876609 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc6102bf-7483-4063-af9d-841e78398b0c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.982374 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.982477 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 08:11:44 crc kubenswrapper[4806]: I0126 08:11:44.990415 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.084181 4806 generic.go:334] "Generic (PLEG): container finished" podID="6f121c86-c5ee-47c3-b80b-f8791a68ee15" containerID="31f15ffccfba1e8be909c21c0618c26eb1e7bfa209ca17a5c1161b715201b10c" exitCode=0 Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.084261 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" event={"ID":"6f121c86-c5ee-47c3-b80b-f8791a68ee15","Type":"ContainerDied","Data":"31f15ffccfba1e8be909c21c0618c26eb1e7bfa209ca17a5c1161b715201b10c"} Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.141102 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qw29c" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.141807 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qw29c" event={"ID":"bc6102bf-7483-4063-af9d-841e78398b0c","Type":"ContainerDied","Data":"c89fec84698ff1592ba8f352fd5b70972d628d8262b81d473618c696f80f66fc"} Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.141831 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c89fec84698ff1592ba8f352fd5b70972d628d8262b81d473618c696f80f66fc" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.147115 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.287153 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-dns-svc\") pod \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.287248 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-dns-swift-storage-0\") pod \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.287275 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-ovsdbserver-sb\") pod \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.287374 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-ovsdbserver-nb\") pod \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.287453 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc29v\" (UniqueName: \"kubernetes.io/projected/6f121c86-c5ee-47c3-b80b-f8791a68ee15-kube-api-access-mc29v\") pod \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.287495 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-config\") pod \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\" (UID: \"6f121c86-c5ee-47c3-b80b-f8791a68ee15\") " Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.299564 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f121c86-c5ee-47c3-b80b-f8791a68ee15-kube-api-access-mc29v" (OuterVolumeSpecName: "kube-api-access-mc29v") pod "6f121c86-c5ee-47c3-b80b-f8791a68ee15" (UID: "6f121c86-c5ee-47c3-b80b-f8791a68ee15"). InnerVolumeSpecName "kube-api-access-mc29v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.354143 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 08:11:45 crc kubenswrapper[4806]: E0126 08:11:45.355851 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc6102bf-7483-4063-af9d-841e78398b0c" containerName="cinder-db-sync" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.355971 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc6102bf-7483-4063-af9d-841e78398b0c" containerName="cinder-db-sync" Jan 26 08:11:45 crc kubenswrapper[4806]: E0126 08:11:45.356362 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f121c86-c5ee-47c3-b80b-f8791a68ee15" containerName="dnsmasq-dns" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.356440 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f121c86-c5ee-47c3-b80b-f8791a68ee15" containerName="dnsmasq-dns" Jan 26 08:11:45 crc kubenswrapper[4806]: E0126 08:11:45.356558 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f121c86-c5ee-47c3-b80b-f8791a68ee15" containerName="init" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.356632 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f121c86-c5ee-47c3-b80b-f8791a68ee15" containerName="init" Jan 26 08:11:45 crc kubenswrapper[4806]: E0126 08:11:45.356704 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19528149-09a1-44a5-b419-bbe91789d493" containerName="heat-db-sync" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.356783 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="19528149-09a1-44a5-b419-bbe91789d493" containerName="heat-db-sync" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.357220 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f121c86-c5ee-47c3-b80b-f8791a68ee15" containerName="dnsmasq-dns" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.357310 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="19528149-09a1-44a5-b419-bbe91789d493" containerName="heat-db-sync" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.357397 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc6102bf-7483-4063-af9d-841e78398b0c" containerName="cinder-db-sync" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.358747 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.363725 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.364039 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.364151 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zk99g" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.364612 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.407508 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.408678 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mc29v\" (UniqueName: \"kubernetes.io/projected/6f121c86-c5ee-47c3-b80b-f8791a68ee15-kube-api-access-mc29v\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.508058 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-6g69f"] Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.528405 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6f121c86-c5ee-47c3-b80b-f8791a68ee15" (UID: "6f121c86-c5ee-47c3-b80b-f8791a68ee15"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.540317 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f121c86-c5ee-47c3-b80b-f8791a68ee15" (UID: "6f121c86-c5ee-47c3-b80b-f8791a68ee15"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.540886 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f121c86-c5ee-47c3-b80b-f8791a68ee15" (UID: "6f121c86-c5ee-47c3-b80b-f8791a68ee15"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.541790 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.548824 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-config-data\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.548687 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.547649 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-6g69f"] Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.550590 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.554222 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42d9518b-30e3-453d-9680-c84861b479e5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.554937 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpg5k\" (UniqueName: \"kubernetes.io/projected/42d9518b-30e3-453d-9680-c84861b479e5-kube-api-access-zpg5k\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.555230 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-scripts\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.561323 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f121c86-c5ee-47c3-b80b-f8791a68ee15" (UID: "6f121c86-c5ee-47c3-b80b-f8791a68ee15"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.568383 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.568491 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.568843 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.568918 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.590146 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-config" (OuterVolumeSpecName: "config") pod "6f121c86-c5ee-47c3-b80b-f8791a68ee15" (UID: "6f121c86-c5ee-47c3-b80b-f8791a68ee15"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.598763 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.642129 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.644403 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.649059 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.681316 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.681378 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-config\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.681440 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.681460 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.681477 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.681536 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42d9518b-30e3-453d-9680-c84861b479e5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.681583 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpg5k\" (UniqueName: \"kubernetes.io/projected/42d9518b-30e3-453d-9680-c84861b479e5-kube-api-access-zpg5k\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.681639 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-dns-svc\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.681673 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-scripts\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.681778 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.681815 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-config-data\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.681854 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w76l\" (UniqueName: \"kubernetes.io/projected/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-kube-api-access-7w76l\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.682411 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42d9518b-30e3-453d-9680-c84861b479e5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.699291 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.701215 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-scripts\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.701573 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f121c86-c5ee-47c3-b80b-f8791a68ee15-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.707229 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.708344 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-config-data\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.726748 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 08:11:45 crc 
kubenswrapper[4806]: I0126 08:11:45.733672 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpg5k\" (UniqueName: \"kubernetes.io/projected/42d9518b-30e3-453d-9680-c84861b479e5-kube-api-access-zpg5k\") pod \"cinder-scheduler-0\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " pod="openstack/cinder-scheduler-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.802905 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-config-data\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.802966 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-config-data-custom\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.802992 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.803032 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-scripts\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.803069 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w76l\" (UniqueName: \"kubernetes.io/projected/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-kube-api-access-7w76l\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.803091 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.803113 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd66660-7983-40b2-94b0-bd9663391fee-logs\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.803130 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-config\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.803159 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.803176 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.803193 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7bd66660-7983-40b2-94b0-bd9663391fee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.803211 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkzzt\" (UniqueName: \"kubernetes.io/projected/7bd66660-7983-40b2-94b0-bd9663391fee-kube-api-access-pkzzt\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.803255 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-dns-svc\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.804358 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-dns-svc\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.805902 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.806472 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-config\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.807032 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.810350 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: 
\"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.821549 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w76l\" (UniqueName: \"kubernetes.io/projected/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-kube-api-access-7w76l\") pod \"dnsmasq-dns-6578955fd5-6g69f\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.908441 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-config-data\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.908570 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-config-data-custom\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.909121 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.909165 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-scripts\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.909310 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd66660-7983-40b2-94b0-bd9663391fee-logs\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.909448 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7bd66660-7983-40b2-94b0-bd9663391fee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.909472 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkzzt\" (UniqueName: \"kubernetes.io/projected/7bd66660-7983-40b2-94b0-bd9663391fee-kube-api-access-pkzzt\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.910118 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7bd66660-7983-40b2-94b0-bd9663391fee-etc-machine-id\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.910502 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd66660-7983-40b2-94b0-bd9663391fee-logs\") pod 
\"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.919588 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-scripts\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.920299 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.926918 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-config-data-custom\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.929710 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-config-data\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:45 crc kubenswrapper[4806]: I0126 08:11:45.936650 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkzzt\" (UniqueName: \"kubernetes.io/projected/7bd66660-7983-40b2-94b0-bd9663391fee-kube-api-access-pkzzt\") pod \"cinder-api-0\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " pod="openstack/cinder-api-0" Jan 26 08:11:46 crc kubenswrapper[4806]: I0126 08:11:46.011139 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 08:11:46 crc kubenswrapper[4806]: I0126 08:11:46.197807 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 08:11:46 crc kubenswrapper[4806]: I0126 08:11:46.197936 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 08:11:46 crc kubenswrapper[4806]: I0126 08:11:46.199153 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 08:11:46 crc kubenswrapper[4806]: I0126 08:11:46.561349 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" Jan 26 08:11:46 crc kubenswrapper[4806]: I0126 08:11:46.561535 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-d6hkg" event={"ID":"6f121c86-c5ee-47c3-b80b-f8791a68ee15","Type":"ContainerDied","Data":"38f7440d4992a6b7670a357834a24f326b655e43b7cccd8ce4616dbe5c234e3b"} Jan 26 08:11:46 crc kubenswrapper[4806]: I0126 08:11:46.562119 4806 scope.go:117] "RemoveContainer" containerID="31f15ffccfba1e8be909c21c0618c26eb1e7bfa209ca17a5c1161b715201b10c" Jan 26 08:11:46 crc kubenswrapper[4806]: I0126 08:11:46.594127 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:46 crc kubenswrapper[4806]: I0126 08:11:46.602491 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 08:11:46 crc kubenswrapper[4806]: I0126 08:11:46.621502 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d6hkg"] Jan 26 08:11:46 crc kubenswrapper[4806]: I0126 08:11:46.644957 4806 scope.go:117] "RemoveContainer" containerID="16c3f0f7b1670dde9dd5b12e056f8138fefa91d69ceb8da299d19ecb16396892" Jan 26 08:11:46 crc kubenswrapper[4806]: I0126 08:11:46.645913 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-d6hkg"] Jan 26 08:11:47 crc kubenswrapper[4806]: I0126 08:11:47.081201 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f121c86-c5ee-47c3-b80b-f8791a68ee15" path="/var/lib/kubelet/pods/6f121c86-c5ee-47c3-b80b-f8791a68ee15/volumes" Jan 26 08:11:47 crc kubenswrapper[4806]: I0126 08:11:47.230023 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 08:11:47 crc kubenswrapper[4806]: I0126 08:11:47.534770 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-6g69f"] Jan 26 08:11:47 crc kubenswrapper[4806]: I0126 08:11:47.645600 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 08:11:47 crc kubenswrapper[4806]: I0126 08:11:47.650016 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42d9518b-30e3-453d-9680-c84861b479e5","Type":"ContainerStarted","Data":"c40b2163dc8728b2a720be96b3020cd5c25a673d7c2540fa72880cddbde0275a"} Jan 26 08:11:48 crc kubenswrapper[4806]: I0126 08:11:48.597933 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:11:48 crc kubenswrapper[4806]: I0126 08:11:48.673462 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7bd66660-7983-40b2-94b0-bd9663391fee","Type":"ContainerStarted","Data":"3c9cd6990c3993b25d4c9718941003d4655ec76a4e802f6b2857ff109f92bfb9"} Jan 26 08:11:48 crc kubenswrapper[4806]: I0126 08:11:48.673503 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7bd66660-7983-40b2-94b0-bd9663391fee","Type":"ContainerStarted","Data":"ad6a6d51316413a785de65ac54e7490711c5222011bade4bfde24e8cebdaa5e9"} Jan 26 08:11:48 crc kubenswrapper[4806]: I0126 08:11:48.691026 4806 generic.go:334] "Generic (PLEG): container finished" podID="ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" containerID="52d032c86aa3acb5b6cb177137e22eaae32a133c95397506eff9fcd681c18b90" exitCode=0 Jan 26 08:11:48 crc kubenswrapper[4806]: I0126 08:11:48.691166 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" event={"ID":"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451","Type":"ContainerDied","Data":"52d032c86aa3acb5b6cb177137e22eaae32a133c95397506eff9fcd681c18b90"} Jan 26 08:11:48 crc kubenswrapper[4806]: I0126 08:11:48.691204 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" event={"ID":"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451","Type":"ContainerStarted","Data":"204a06576a96662de2c1771ff7a815c4d192f978e42ca4d0cc4b27a0140e95dd"} Jan 26 08:11:49 crc kubenswrapper[4806]: I0126 08:11:49.293071 4806 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 08:11:49 crc kubenswrapper[4806]: I0126 08:11:49.650741 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:11:49 crc kubenswrapper[4806]: I0126 08:11:49.652974 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:11:50 crc kubenswrapper[4806]: I0126 08:11:50.641749 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:11:50 crc kubenswrapper[4806]: I0126 08:11:50.743403 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" event={"ID":"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451","Type":"ContainerStarted","Data":"49c0d8e9f9683e509328152203cc80f37bdfdd244576c95436f96d95be8dfbc1"} Jan 26 08:11:50 crc kubenswrapper[4806]: I0126 08:11:50.743543 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:50 crc kubenswrapper[4806]: I0126 08:11:50.748136 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42d9518b-30e3-453d-9680-c84861b479e5","Type":"ContainerStarted","Data":"193b314a43294d9e54ad42bc4ba171b212b7d0142521aaead843c1a609645606"} Jan 26 08:11:50 crc kubenswrapper[4806]: I0126 08:11:50.753195 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7bd66660-7983-40b2-94b0-bd9663391fee","Type":"ContainerStarted","Data":"04315d96d9ac1aee7f4e108564753a04b9c84266f4363555a34babee77a45edf"} Jan 26 08:11:50 crc kubenswrapper[4806]: I0126 08:11:50.753315 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7bd66660-7983-40b2-94b0-bd9663391fee" containerName="cinder-api-log" containerID="cri-o://3c9cd6990c3993b25d4c9718941003d4655ec76a4e802f6b2857ff109f92bfb9" gracePeriod=30 Jan 26 08:11:50 crc kubenswrapper[4806]: I0126 08:11:50.753325 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 08:11:50 crc kubenswrapper[4806]: I0126 08:11:50.753392 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="7bd66660-7983-40b2-94b0-bd9663391fee" containerName="cinder-api" containerID="cri-o://04315d96d9ac1aee7f4e108564753a04b9c84266f4363555a34babee77a45edf" gracePeriod=30 Jan 26 08:11:50 crc kubenswrapper[4806]: I0126 08:11:50.772456 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" podStartSLOduration=5.772438424 podStartE2EDuration="5.772438424s" podCreationTimestamp="2026-01-26 08:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:50.765597022 +0000 UTC m=+1090.030005088" watchObservedRunningTime="2026-01-26 08:11:50.772438424 +0000 UTC m=+1090.036846480" Jan 26 08:11:50 crc kubenswrapper[4806]: I0126 08:11:50.796529 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.79649785 podStartE2EDuration="5.79649785s" podCreationTimestamp="2026-01-26 08:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:50.788145295 +0000 UTC m=+1090.052553341" watchObservedRunningTime="2026-01-26 08:11:50.79649785 +0000 UTC m=+1090.060905896" Jan 26 08:11:50 crc kubenswrapper[4806]: I0126 08:11:50.817832 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.220377 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7bcf6cb6cc-b9fxx"] Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.221825 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7bcf6cb6cc-b9fxx" podUID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" containerName="neutron-httpd" containerID="cri-o://703c7f76126c519f5df185e47194d1d9b3f23aed98c6ae176f3ccd52e6ab29ea" gracePeriod=30 Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.221881 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7bcf6cb6cc-b9fxx" podUID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" containerName="neutron-api" containerID="cri-o://fd594b637acf82dc96d273bb5255ca283aeb955bd5f41111dc327731bb40271d" gracePeriod=30 Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.254579 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7bcf6cb6cc-b9fxx" podUID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": EOF" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.276593 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74d94c4d65-ms88t"] Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.278303 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.319204 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74d94c4d65-ms88t"] Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.409713 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-public-tls-certs\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.409799 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-combined-ca-bundle\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.409825 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-httpd-config\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.409848 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-config\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.409879 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-ovndb-tls-certs\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.409906 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-internal-tls-certs\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.409978 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnv49\" (UniqueName: \"kubernetes.io/projected/1d9165bb-c377-4c19-9728-58a6ea046166-kube-api-access-gnv49\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.511737 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-httpd-config\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.511793 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-config\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.511834 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-ovndb-tls-certs\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.511868 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-internal-tls-certs\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.511945 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnv49\" (UniqueName: \"kubernetes.io/projected/1d9165bb-c377-4c19-9728-58a6ea046166-kube-api-access-gnv49\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.511971 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-public-tls-certs\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.512018 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-combined-ca-bundle\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.522180 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-public-tls-certs\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.522842 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-combined-ca-bundle\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.531711 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-internal-tls-certs\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.532715 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-config\") pod \"neutron-74d94c4d65-ms88t\" (UID: 
\"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.535183 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-ovndb-tls-certs\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.555419 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnv49\" (UniqueName: \"kubernetes.io/projected/1d9165bb-c377-4c19-9728-58a6ea046166-kube-api-access-gnv49\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.555946 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-httpd-config\") pod \"neutron-74d94c4d65-ms88t\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.620320 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.830623 4806 generic.go:334] "Generic (PLEG): container finished" podID="7bd66660-7983-40b2-94b0-bd9663391fee" containerID="3c9cd6990c3993b25d4c9718941003d4655ec76a4e802f6b2857ff109f92bfb9" exitCode=143 Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.830723 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7bd66660-7983-40b2-94b0-bd9663391fee","Type":"ContainerDied","Data":"3c9cd6990c3993b25d4c9718941003d4655ec76a4e802f6b2857ff109f92bfb9"} Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.852099 4806 generic.go:334] "Generic (PLEG): container finished" podID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" containerID="703c7f76126c519f5df185e47194d1d9b3f23aed98c6ae176f3ccd52e6ab29ea" exitCode=0 Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.852201 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bcf6cb6cc-b9fxx" event={"ID":"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38","Type":"ContainerDied","Data":"703c7f76126c519f5df185e47194d1d9b3f23aed98c6ae176f3ccd52e6ab29ea"} Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.867715 4806 generic.go:334] "Generic (PLEG): container finished" podID="36dba152-b43d-47c4-94bb-874f93b0884f" containerID="5a999e2cb6a801c6ea12d47114e5ca927ccfbec059be674dc6989803b3e94929" exitCode=137 Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.867753 4806 generic.go:334] "Generic (PLEG): container finished" podID="36dba152-b43d-47c4-94bb-874f93b0884f" containerID="f43ad7611386218711315d63b148811af89ce769e51f9c16adce40cda7cf010b" exitCode=137 Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.867804 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbdcb8bcc-96jf5" event={"ID":"36dba152-b43d-47c4-94bb-874f93b0884f","Type":"ContainerDied","Data":"5a999e2cb6a801c6ea12d47114e5ca927ccfbec059be674dc6989803b3e94929"} Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.867832 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbdcb8bcc-96jf5" 
event={"ID":"36dba152-b43d-47c4-94bb-874f93b0884f","Type":"ContainerDied","Data":"f43ad7611386218711315d63b148811af89ce769e51f9c16adce40cda7cf010b"} Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.884607 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42d9518b-30e3-453d-9680-c84861b479e5","Type":"ContainerStarted","Data":"c1e20f70cd9e0de61ac9187c7dcc8c3273585475d8fc0e0d6859546ed5a413f4"} Jan 26 08:11:51 crc kubenswrapper[4806]: I0126 08:11:51.936050 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.810347899 podStartE2EDuration="6.936033831s" podCreationTimestamp="2026-01-26 08:11:45 +0000 UTC" firstStartedPulling="2026-01-26 08:11:47.237994775 +0000 UTC m=+1086.502402831" lastFinishedPulling="2026-01-26 08:11:48.363680707 +0000 UTC m=+1087.628088763" observedRunningTime="2026-01-26 08:11:51.925067443 +0000 UTC m=+1091.189475499" watchObservedRunningTime="2026-01-26 08:11:51.936033831 +0000 UTC m=+1091.200441887" Jan 26 08:11:52 crc kubenswrapper[4806]: E0126 08:11:52.194319 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f121c86_c5ee_47c3_b80b_f8791a68ee15.slice/crio-31f15ffccfba1e8be909c21c0618c26eb1e7bfa209ca17a5c1161b715201b10c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f121c86_c5ee_47c3_b80b_f8791a68ee15.slice/crio-38f7440d4992a6b7670a357834a24f326b655e43b7cccd8ce4616dbe5c234e3b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc6102bf_7483_4063_af9d_841e78398b0c.slice/crio-conmon-2fe5ae91a9473734ce41faf4efb4de45a5d442716ca0e10fd78e7008169ce5c0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc6102bf_7483_4063_af9d_841e78398b0c.slice/crio-c89fec84698ff1592ba8f352fd5b70972d628d8262b81d473618c696f80f66fc\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f121c86_c5ee_47c3_b80b_f8791a68ee15.slice/crio-conmon-31f15ffccfba1e8be909c21c0618c26eb1e7bfa209ca17a5c1161b715201b10c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc6102bf_7483_4063_af9d_841e78398b0c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f121c86_c5ee_47c3_b80b_f8791a68ee15.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc6102bf_7483_4063_af9d_841e78398b0c.slice/crio-2fe5ae91a9473734ce41faf4efb4de45a5d442716ca0e10fd78e7008169ce5c0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19528149_09a1_44a5_b419_bbe91789d493.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19528149_09a1_44a5_b419_bbe91789d493.slice/crio-fd5e70421906e9710119a2f5550ab75b467b5cac848c81709f2f2e7f6bb2530d\": RecentStats: unable to find data in memory cache]" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.236732 4806 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/barbican-api-5554df79f4-4pvrc" podUID="758b7482-35c7-4cda-aaff-f3e3784bc5c4" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.167:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.308058 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.352384 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36dba152-b43d-47c4-94bb-874f93b0884f-config-data\") pod \"36dba152-b43d-47c4-94bb-874f93b0884f\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.352476 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36dba152-b43d-47c4-94bb-874f93b0884f-horizon-secret-key\") pod \"36dba152-b43d-47c4-94bb-874f93b0884f\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.352546 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36dba152-b43d-47c4-94bb-874f93b0884f-scripts\") pod \"36dba152-b43d-47c4-94bb-874f93b0884f\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.352703 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tmzg\" (UniqueName: \"kubernetes.io/projected/36dba152-b43d-47c4-94bb-874f93b0884f-kube-api-access-9tmzg\") pod \"36dba152-b43d-47c4-94bb-874f93b0884f\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.352760 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36dba152-b43d-47c4-94bb-874f93b0884f-logs\") pod \"36dba152-b43d-47c4-94bb-874f93b0884f\" (UID: \"36dba152-b43d-47c4-94bb-874f93b0884f\") " Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.356914 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36dba152-b43d-47c4-94bb-874f93b0884f-logs" (OuterVolumeSpecName: "logs") pod "36dba152-b43d-47c4-94bb-874f93b0884f" (UID: "36dba152-b43d-47c4-94bb-874f93b0884f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.377190 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36dba152-b43d-47c4-94bb-874f93b0884f-kube-api-access-9tmzg" (OuterVolumeSpecName: "kube-api-access-9tmzg") pod "36dba152-b43d-47c4-94bb-874f93b0884f" (UID: "36dba152-b43d-47c4-94bb-874f93b0884f"). InnerVolumeSpecName "kube-api-access-9tmzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.382790 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36dba152-b43d-47c4-94bb-874f93b0884f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "36dba152-b43d-47c4-94bb-874f93b0884f" (UID: "36dba152-b43d-47c4-94bb-874f93b0884f"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.463910 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tmzg\" (UniqueName: \"kubernetes.io/projected/36dba152-b43d-47c4-94bb-874f93b0884f-kube-api-access-9tmzg\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.464214 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36dba152-b43d-47c4-94bb-874f93b0884f-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.464227 4806 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/36dba152-b43d-47c4-94bb-874f93b0884f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.507073 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36dba152-b43d-47c4-94bb-874f93b0884f-config-data" (OuterVolumeSpecName: "config-data") pod "36dba152-b43d-47c4-94bb-874f93b0884f" (UID: "36dba152-b43d-47c4-94bb-874f93b0884f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.507588 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36dba152-b43d-47c4-94bb-874f93b0884f-scripts" (OuterVolumeSpecName: "scripts") pod "36dba152-b43d-47c4-94bb-874f93b0884f" (UID: "36dba152-b43d-47c4-94bb-874f93b0884f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.556471 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74d94c4d65-ms88t"] Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.569836 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36dba152-b43d-47c4-94bb-874f93b0884f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.569874 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/36dba152-b43d-47c4-94bb-874f93b0884f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.908891 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cbdcb8bcc-96jf5" event={"ID":"36dba152-b43d-47c4-94bb-874f93b0884f","Type":"ContainerDied","Data":"c362f398f6f7a61971db6576b414a84f06551e1ea006f0559447851533f9a772"} Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.908938 4806 scope.go:117] "RemoveContainer" containerID="5a999e2cb6a801c6ea12d47114e5ca927ccfbec059be674dc6989803b3e94929" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.909101 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cbdcb8bcc-96jf5" Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.946439 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74d94c4d65-ms88t" event={"ID":"1d9165bb-c377-4c19-9728-58a6ea046166","Type":"ContainerStarted","Data":"22610a1a13aa710c22702bd17d86583a07a6500c5e5b2d697b6dae3f9654602a"} Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.952930 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cbdcb8bcc-96jf5"] Jan 26 08:11:52 crc kubenswrapper[4806]: I0126 08:11:52.980129 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cbdcb8bcc-96jf5"] Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.062901 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36dba152-b43d-47c4-94bb-874f93b0884f" path="/var/lib/kubelet/pods/36dba152-b43d-47c4-94bb-874f93b0884f/volumes" Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.171803 4806 scope.go:117] "RemoveContainer" containerID="f43ad7611386218711315d63b148811af89ce769e51f9c16adce40cda7cf010b" Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.227141 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5554df79f4-4pvrc" podUID="758b7482-35c7-4cda-aaff-f3e3784bc5c4" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.167:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.227735 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5554df79f4-4pvrc" podUID="758b7482-35c7-4cda-aaff-f3e3784bc5c4" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.167:9311/healthcheck\": context deadline exceeded" Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.428898 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7bcf6cb6cc-b9fxx" podUID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.158:9696/\": dial tcp 10.217.0.158:9696: connect: connection refused" Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.639832 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.728855 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d485d788d-5q4tb" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.728947 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.729834 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d"} pod="openstack/horizon-7d485d788d-5q4tb" containerMessage="Container horizon failed startup probe, will be 
restarted" Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.729871 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d485d788d-5q4tb" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon" containerID="cri-o://f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d" gracePeriod=30 Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.965908 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74d94c4d65-ms88t" event={"ID":"1d9165bb-c377-4c19-9728-58a6ea046166","Type":"ContainerStarted","Data":"3b40b8338b443d9b968b8d4234f7e57d534802db3a000fcee7657289cb8287e4"} Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.965968 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74d94c4d65-ms88t" event={"ID":"1d9165bb-c377-4c19-9728-58a6ea046166","Type":"ContainerStarted","Data":"5f29e3670e7c17d3438882af45c0c1ecd4e0e50999b2425753851569b4fa4361"} Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.966132 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:11:53 crc kubenswrapper[4806]: I0126 08:11:53.994052 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6b8f96b47b-sbsnb" podUID="d4ed3e96-22ec-410e-8f50-afd310343aa8" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Jan 26 08:11:54 crc kubenswrapper[4806]: I0126 08:11:54.008942 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74d94c4d65-ms88t" podStartSLOduration=3.008922744 podStartE2EDuration="3.008922744s" podCreationTimestamp="2026-01-26 08:11:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:11:53.993400898 +0000 UTC m=+1093.257808954" watchObservedRunningTime="2026-01-26 08:11:54.008922744 +0000 UTC m=+1093.273330800" Jan 26 08:11:54 crc kubenswrapper[4806]: I0126 08:11:54.732768 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:11:54 crc kubenswrapper[4806]: I0126 08:11:54.732850 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:11:54 crc kubenswrapper[4806]: I0126 08:11:54.738777 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:54 crc kubenswrapper[4806]: I0126 08:11:54.739373 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:11:55 crc kubenswrapper[4806]: I0126 08:11:55.231781 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5554df79f4-4pvrc" podUID="758b7482-35c7-4cda-aaff-f3e3784bc5c4" containerName="barbican-api" probeResult="failure" output="Get 
\"https://10.217.0.167:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 08:11:56 crc kubenswrapper[4806]: I0126 08:11:56.012004 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 08:11:56 crc kubenswrapper[4806]: I0126 08:11:56.359473 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 08:11:56 crc kubenswrapper[4806]: I0126 08:11:56.405498 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 08:11:56 crc kubenswrapper[4806]: I0126 08:11:56.596655 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:11:56 crc kubenswrapper[4806]: I0126 08:11:56.684133 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xjh66"] Jan 26 08:11:56 crc kubenswrapper[4806]: I0126 08:11:56.684593 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" podUID="101d2806-ced2-4267-86ec-114320756e46" containerName="dnsmasq-dns" containerID="cri-o://db922cc69b21749920c7875d5009173ba5f1c416b99d25f1100564119ef59752" gracePeriod=10 Jan 26 08:11:56 crc kubenswrapper[4806]: I0126 08:11:56.846618 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:57 crc kubenswrapper[4806]: I0126 08:11:57.066383 4806 generic.go:334] "Generic (PLEG): container finished" podID="101d2806-ced2-4267-86ec-114320756e46" containerID="db922cc69b21749920c7875d5009173ba5f1c416b99d25f1100564119ef59752" exitCode=0 Jan 26 08:11:57 crc kubenswrapper[4806]: I0126 08:11:57.066433 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" event={"ID":"101d2806-ced2-4267-86ec-114320756e46","Type":"ContainerDied","Data":"db922cc69b21749920c7875d5009173ba5f1c416b99d25f1100564119ef59752"} Jan 26 08:11:57 crc kubenswrapper[4806]: I0126 08:11:57.079367 4806 generic.go:334] "Generic (PLEG): container finished" podID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" containerID="fd594b637acf82dc96d273bb5255ca283aeb955bd5f41111dc327731bb40271d" exitCode=0 Jan 26 08:11:57 crc kubenswrapper[4806]: I0126 08:11:57.079586 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="42d9518b-30e3-453d-9680-c84861b479e5" containerName="cinder-scheduler" containerID="cri-o://193b314a43294d9e54ad42bc4ba171b212b7d0142521aaead843c1a609645606" gracePeriod=30 Jan 26 08:11:57 crc kubenswrapper[4806]: I0126 08:11:57.079886 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bcf6cb6cc-b9fxx" event={"ID":"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38","Type":"ContainerDied","Data":"fd594b637acf82dc96d273bb5255ca283aeb955bd5f41111dc327731bb40271d"} Jan 26 08:11:57 crc kubenswrapper[4806]: I0126 08:11:57.080165 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="42d9518b-30e3-453d-9680-c84861b479e5" containerName="probe" containerID="cri-o://c1e20f70cd9e0de61ac9187c7dcc8c3273585475d8fc0e0d6859546ed5a413f4" gracePeriod=30 Jan 26 08:11:57 crc kubenswrapper[4806]: I0126 08:11:57.118895 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5554df79f4-4pvrc" Jan 26 08:11:57 crc 
kubenswrapper[4806]: I0126 08:11:57.186160 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76c565f4b6-mqhr5"] Jan 26 08:11:57 crc kubenswrapper[4806]: I0126 08:11:57.186367 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api-log" containerID="cri-o://b5ad4005e4b42c1a424a41154505cbf8b8b75084f4de3935dfea9dc9fd65521c" gracePeriod=30 Jan 26 08:11:57 crc kubenswrapper[4806]: I0126 08:11:57.186436 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api" containerID="cri-o://a587379076ed125fce902c6e4b41e28a15352be0b2f908952fe53d31983910c4" gracePeriod=30 Jan 26 08:11:58 crc kubenswrapper[4806]: I0126 08:11:58.088825 4806 generic.go:334] "Generic (PLEG): container finished" podID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerID="b5ad4005e4b42c1a424a41154505cbf8b8b75084f4de3935dfea9dc9fd65521c" exitCode=143 Jan 26 08:11:58 crc kubenswrapper[4806]: I0126 08:11:58.088904 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76c565f4b6-mqhr5" event={"ID":"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3","Type":"ContainerDied","Data":"b5ad4005e4b42c1a424a41154505cbf8b8b75084f4de3935dfea9dc9fd65521c"} Jan 26 08:11:58 crc kubenswrapper[4806]: I0126 08:11:58.092349 4806 generic.go:334] "Generic (PLEG): container finished" podID="42d9518b-30e3-453d-9680-c84861b479e5" containerID="c1e20f70cd9e0de61ac9187c7dcc8c3273585475d8fc0e0d6859546ed5a413f4" exitCode=0 Jan 26 08:11:58 crc kubenswrapper[4806]: I0126 08:11:58.092377 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42d9518b-30e3-453d-9680-c84861b479e5","Type":"ContainerDied","Data":"c1e20f70cd9e0de61ac9187c7dcc8c3273585475d8fc0e0d6859546ed5a413f4"} Jan 26 08:11:59 crc kubenswrapper[4806]: I0126 08:11:59.117784 4806 generic.go:334] "Generic (PLEG): container finished" podID="42d9518b-30e3-453d-9680-c84861b479e5" containerID="193b314a43294d9e54ad42bc4ba171b212b7d0142521aaead843c1a609645606" exitCode=0 Jan 26 08:11:59 crc kubenswrapper[4806]: I0126 08:11:59.117981 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42d9518b-30e3-453d-9680-c84861b479e5","Type":"ContainerDied","Data":"193b314a43294d9e54ad42bc4ba171b212b7d0142521aaead843c1a609645606"} Jan 26 08:11:59 crc kubenswrapper[4806]: I0126 08:11:59.235152 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" podUID="101d2806-ced2-4267-86ec-114320756e46" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: connect: connection refused" Jan 26 08:11:59 crc kubenswrapper[4806]: I0126 08:11:59.318270 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 08:12:00 crc kubenswrapper[4806]: I0126 08:12:00.637945 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:41306->10.217.0.166:9311: read: connection reset by peer" Jan 26 08:12:00 crc kubenswrapper[4806]: I0126 08:12:00.638033 4806 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": read tcp 10.217.0.2:41302->10.217.0.166:9311: read: connection reset by peer" Jan 26 08:12:01 crc kubenswrapper[4806]: I0126 08:12:01.148388 4806 generic.go:334] "Generic (PLEG): container finished" podID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerID="a587379076ed125fce902c6e4b41e28a15352be0b2f908952fe53d31983910c4" exitCode=0 Jan 26 08:12:01 crc kubenswrapper[4806]: I0126 08:12:01.148435 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76c565f4b6-mqhr5" event={"ID":"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3","Type":"ContainerDied","Data":"a587379076ed125fce902c6e4b41e28a15352be0b2f908952fe53d31983910c4"} Jan 26 08:12:02 crc kubenswrapper[4806]: I0126 08:12:02.093614 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:12:02 crc kubenswrapper[4806]: I0126 08:12:02.160066 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6f665b5db4-wpfmw" Jan 26 08:12:03 crc kubenswrapper[4806]: I0126 08:12:03.748227 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5c44c79675-nsqr2" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.235223 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" podUID="101d2806-ced2-4267-86ec-114320756e46" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.165:5353: connect: connection refused" Jan 26 08:12:04 crc kubenswrapper[4806]: E0126 08:12:04.349599 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 26 08:12:04 crc kubenswrapper[4806]: E0126 08:12:04.349794 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vrfc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(93bf46a8-2942-4b36-9853-88ff5c6e756b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 08:12:04 crc kubenswrapper[4806]: E0126 08:12:04.351200 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="93bf46a8-2942-4b36-9853-88ff5c6e756b" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.551824 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api" probeResult="failure" 
output="Get \"http://10.217.0.166:9311/healthcheck\": dial tcp 10.217.0.166:9311: connect: connection refused" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.551948 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76c565f4b6-mqhr5" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.166:9311/healthcheck\": dial tcp 10.217.0.166:9311: connect: connection refused" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.790995 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.864753 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24nsc\" (UniqueName: \"kubernetes.io/projected/101d2806-ced2-4267-86ec-114320756e46-kube-api-access-24nsc\") pod \"101d2806-ced2-4267-86ec-114320756e46\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.864790 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-dns-swift-storage-0\") pod \"101d2806-ced2-4267-86ec-114320756e46\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.864856 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-dns-svc\") pod \"101d2806-ced2-4267-86ec-114320756e46\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.864904 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-ovsdbserver-sb\") pod \"101d2806-ced2-4267-86ec-114320756e46\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.864957 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-config\") pod \"101d2806-ced2-4267-86ec-114320756e46\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.864974 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-ovsdbserver-nb\") pod \"101d2806-ced2-4267-86ec-114320756e46\" (UID: \"101d2806-ced2-4267-86ec-114320756e46\") " Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.890743 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/101d2806-ced2-4267-86ec-114320756e46-kube-api-access-24nsc" (OuterVolumeSpecName: "kube-api-access-24nsc") pod "101d2806-ced2-4267-86ec-114320756e46" (UID: "101d2806-ced2-4267-86ec-114320756e46"). InnerVolumeSpecName "kube-api-access-24nsc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.942895 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "101d2806-ced2-4267-86ec-114320756e46" (UID: "101d2806-ced2-4267-86ec-114320756e46"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.951191 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "101d2806-ced2-4267-86ec-114320756e46" (UID: "101d2806-ced2-4267-86ec-114320756e46"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.955012 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-config" (OuterVolumeSpecName: "config") pod "101d2806-ced2-4267-86ec-114320756e46" (UID: "101d2806-ced2-4267-86ec-114320756e46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.972818 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.972850 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.972859 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24nsc\" (UniqueName: \"kubernetes.io/projected/101d2806-ced2-4267-86ec-114320756e46-kube-api-access-24nsc\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.972868 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:04 crc kubenswrapper[4806]: I0126 08:12:04.987998 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "101d2806-ced2-4267-86ec-114320756e46" (UID: "101d2806-ced2-4267-86ec-114320756e46"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.032736 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "101d2806-ced2-4267-86ec-114320756e46" (UID: "101d2806-ced2-4267-86ec-114320756e46"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.074127 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.074148 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/101d2806-ced2-4267-86ec-114320756e46-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.112924 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.175809 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-combined-ca-bundle\") pod \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.175877 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-logs\") pod \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.175924 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xlr7\" (UniqueName: \"kubernetes.io/projected/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-kube-api-access-6xlr7\") pod \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.176018 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-config-data\") pod \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.176186 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-config-data-custom\") pod \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\" (UID: \"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.181923 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-logs" (OuterVolumeSpecName: "logs") pod "1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" (UID: "1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.190092 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-kube-api-access-6xlr7" (OuterVolumeSpecName: "kube-api-access-6xlr7") pod "1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" (UID: "1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3"). InnerVolumeSpecName "kube-api-access-6xlr7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.193676 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" (UID: "1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.217988 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76c565f4b6-mqhr5" event={"ID":"1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3","Type":"ContainerDied","Data":"02368f9990a62f811e5345abc44c34d242c3db1a62c54fb7d641474f562b48a3"} Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.218056 4806 scope.go:117] "RemoveContainer" containerID="a587379076ed125fce902c6e4b41e28a15352be0b2f908952fe53d31983910c4" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.218225 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76c565f4b6-mqhr5" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.226013 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93bf46a8-2942-4b36-9853-88ff5c6e756b" containerName="ceilometer-notification-agent" containerID="cri-o://ce21628d69b54f9e7078ea4cc4723a743027cca289649838f8e9a6552da1cecf" gracePeriod=30 Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.226328 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.226685 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-xjh66" event={"ID":"101d2806-ced2-4267-86ec-114320756e46","Type":"ContainerDied","Data":"45798520d3189bd7554182f54be640afcc9d6933ce2566f26e17b0073778a060"} Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.226854 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="93bf46a8-2942-4b36-9853-88ff5c6e756b" containerName="sg-core" containerID="cri-o://8301cf9275eda41f8581b82288261e3c0d7d73701303206c3c8836982952549f" gracePeriod=30 Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.258703 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" (UID: "1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.287856 4806 scope.go:117] "RemoveContainer" containerID="b5ad4005e4b42c1a424a41154505cbf8b8b75084f4de3935dfea9dc9fd65521c" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.296740 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.296769 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.296781 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xlr7\" (UniqueName: \"kubernetes.io/projected/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-kube-api-access-6xlr7\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.296794 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.317596 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xjh66"] Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.328631 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-xjh66"] Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.345713 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.372277 4806 scope.go:117] "RemoveContainer" containerID="db922cc69b21749920c7875d5009173ba5f1c416b99d25f1100564119ef59752" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.405572 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-ovndb-tls-certs\") pod \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.405609 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-public-tls-certs\") pod \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.405688 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-httpd-config\") pod \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.405838 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-internal-tls-certs\") pod \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.405871 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-77d4r\" (UniqueName: \"kubernetes.io/projected/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-kube-api-access-77d4r\") pod \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.405957 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-combined-ca-bundle\") pod \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.405989 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-config\") pod \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\" (UID: \"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.429810 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-config-data" (OuterVolumeSpecName: "config-data") pod "1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" (UID: "1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.430611 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" (UID: "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.437331 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-kube-api-access-77d4r" (OuterVolumeSpecName: "kube-api-access-77d4r") pod "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" (UID: "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38"). InnerVolumeSpecName "kube-api-access-77d4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.508777 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.508809 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77d4r\" (UniqueName: \"kubernetes.io/projected/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-kube-api-access-77d4r\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.508821 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.541313 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" (UID: "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.546753 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-config" (OuterVolumeSpecName: "config") pod "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" (UID: "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.562038 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" (UID: "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.572778 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" (UID: "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.611872 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.611900 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.611910 4806 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.611918 4806 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.613752 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" (UID: "a0dff0ed-4136-4f1b-b3d4-5eb610a99d38"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.699032 4806 scope.go:117] "RemoveContainer" containerID="9f5a7b12ade198be05a17d5e64123755ee3e65667b31aa1124c321ae77428c01" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.701611 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.716919 4806 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.720419 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76c565f4b6-mqhr5"] Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.736188 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-76c565f4b6-mqhr5"] Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.818499 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42d9518b-30e3-453d-9680-c84861b479e5-etc-machine-id\") pod \"42d9518b-30e3-453d-9680-c84861b479e5\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.818658 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-config-data-custom\") pod \"42d9518b-30e3-453d-9680-c84861b479e5\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.818730 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpg5k\" (UniqueName: \"kubernetes.io/projected/42d9518b-30e3-453d-9680-c84861b479e5-kube-api-access-zpg5k\") pod \"42d9518b-30e3-453d-9680-c84861b479e5\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.818750 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-scripts\") pod \"42d9518b-30e3-453d-9680-c84861b479e5\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.818807 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-config-data\") pod \"42d9518b-30e3-453d-9680-c84861b479e5\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.818859 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-combined-ca-bundle\") pod \"42d9518b-30e3-453d-9680-c84861b479e5\" (UID: \"42d9518b-30e3-453d-9680-c84861b479e5\") " Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.819605 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42d9518b-30e3-453d-9680-c84861b479e5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "42d9518b-30e3-453d-9680-c84861b479e5" (UID: "42d9518b-30e3-453d-9680-c84861b479e5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.826123 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "42d9518b-30e3-453d-9680-c84861b479e5" (UID: "42d9518b-30e3-453d-9680-c84861b479e5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.826835 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-scripts" (OuterVolumeSpecName: "scripts") pod "42d9518b-30e3-453d-9680-c84861b479e5" (UID: "42d9518b-30e3-453d-9680-c84861b479e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.832999 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42d9518b-30e3-453d-9680-c84861b479e5-kube-api-access-zpg5k" (OuterVolumeSpecName: "kube-api-access-zpg5k") pod "42d9518b-30e3-453d-9680-c84861b479e5" (UID: "42d9518b-30e3-453d-9680-c84861b479e5"). InnerVolumeSpecName "kube-api-access-zpg5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.859715 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 08:12:05 crc kubenswrapper[4806]: E0126 08:12:05.860190 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" containerName="neutron-api" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860210 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" containerName="neutron-api" Jan 26 08:12:05 crc kubenswrapper[4806]: E0126 08:12:05.860229 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860236 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api" Jan 26 08:12:05 crc kubenswrapper[4806]: E0126 08:12:05.860246 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101d2806-ced2-4267-86ec-114320756e46" containerName="dnsmasq-dns" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860251 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="101d2806-ced2-4267-86ec-114320756e46" containerName="dnsmasq-dns" Jan 26 08:12:05 crc kubenswrapper[4806]: E0126 08:12:05.860262 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" containerName="neutron-httpd" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860267 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" containerName="neutron-httpd" Jan 26 08:12:05 crc kubenswrapper[4806]: E0126 08:12:05.860282 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101d2806-ced2-4267-86ec-114320756e46" containerName="init" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860289 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="101d2806-ced2-4267-86ec-114320756e46" containerName="init" Jan 26 08:12:05 crc kubenswrapper[4806]: E0126 08:12:05.860298 4806 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="42d9518b-30e3-453d-9680-c84861b479e5" containerName="probe" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860305 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d9518b-30e3-453d-9680-c84861b479e5" containerName="probe" Jan 26 08:12:05 crc kubenswrapper[4806]: E0126 08:12:05.860321 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api-log" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860328 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api-log" Jan 26 08:12:05 crc kubenswrapper[4806]: E0126 08:12:05.860344 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42d9518b-30e3-453d-9680-c84861b479e5" containerName="cinder-scheduler" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860349 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="42d9518b-30e3-453d-9680-c84861b479e5" containerName="cinder-scheduler" Jan 26 08:12:05 crc kubenswrapper[4806]: E0126 08:12:05.860363 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36dba152-b43d-47c4-94bb-874f93b0884f" containerName="horizon-log" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860369 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="36dba152-b43d-47c4-94bb-874f93b0884f" containerName="horizon-log" Jan 26 08:12:05 crc kubenswrapper[4806]: E0126 08:12:05.860378 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36dba152-b43d-47c4-94bb-874f93b0884f" containerName="horizon" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860384 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="36dba152-b43d-47c4-94bb-874f93b0884f" containerName="horizon" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860552 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" containerName="neutron-api" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860564 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api-log" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860575 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d9518b-30e3-453d-9680-c84861b479e5" containerName="probe" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860591 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" containerName="neutron-httpd" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860606 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="101d2806-ced2-4267-86ec-114320756e46" containerName="dnsmasq-dns" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860613 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="42d9518b-30e3-453d-9680-c84861b479e5" containerName="cinder-scheduler" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860625 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="36dba152-b43d-47c4-94bb-874f93b0884f" containerName="horizon-log" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860633 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" containerName="barbican-api" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.860643 4806 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="36dba152-b43d-47c4-94bb-874f93b0884f" containerName="horizon" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.861288 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.866794 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.867167 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-7x2zf" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.869396 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.888492 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.901617 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42d9518b-30e3-453d-9680-c84861b479e5" (UID: "42d9518b-30e3-453d-9680-c84861b479e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.920340 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " pod="openstack/openstackclient" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.920383 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-openstack-config-secret\") pod \"openstackclient\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " pod="openstack/openstackclient" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.920475 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gbtf\" (UniqueName: \"kubernetes.io/projected/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-kube-api-access-5gbtf\") pod \"openstackclient\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " pod="openstack/openstackclient" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.920495 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-openstack-config\") pod \"openstackclient\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " pod="openstack/openstackclient" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.920582 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.920594 4806 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42d9518b-30e3-453d-9680-c84861b479e5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 
08:12:05.920604 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.920614 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpg5k\" (UniqueName: \"kubernetes.io/projected/42d9518b-30e3-453d-9680-c84861b479e5-kube-api-access-zpg5k\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.920661 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:05 crc kubenswrapper[4806]: I0126 08:12:05.966164 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-config-data" (OuterVolumeSpecName: "config-data") pod "42d9518b-30e3-453d-9680-c84861b479e5" (UID: "42d9518b-30e3-453d-9680-c84861b479e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.021685 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.021738 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-openstack-config-secret\") pod \"openstackclient\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.021834 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gbtf\" (UniqueName: \"kubernetes.io/projected/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-kube-api-access-5gbtf\") pod \"openstackclient\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.021855 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-openstack-config\") pod \"openstackclient\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.021923 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42d9518b-30e3-453d-9680-c84861b479e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.022693 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-openstack-config\") pod \"openstackclient\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.025916 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-openstack-config-secret\") pod \"openstackclient\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.027146 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.042934 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gbtf\" (UniqueName: \"kubernetes.io/projected/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-kube-api-access-5gbtf\") pod \"openstackclient\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.140513 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.141251 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.174636 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.188675 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.190031 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.197619 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.227168 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f39640fb-b2ef-4514-84d0-38c6d07adb11-openstack-config\") pod \"openstackclient\" (UID: \"f39640fb-b2ef-4514-84d0-38c6d07adb11\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.227251 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f39640fb-b2ef-4514-84d0-38c6d07adb11-openstack-config-secret\") pod \"openstackclient\" (UID: \"f39640fb-b2ef-4514-84d0-38c6d07adb11\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.227312 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctzcb\" (UniqueName: \"kubernetes.io/projected/f39640fb-b2ef-4514-84d0-38c6d07adb11-kube-api-access-ctzcb\") pod \"openstackclient\" (UID: \"f39640fb-b2ef-4514-84d0-38c6d07adb11\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.227330 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39640fb-b2ef-4514-84d0-38c6d07adb11-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f39640fb-b2ef-4514-84d0-38c6d07adb11\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.245441 4806 generic.go:334] "Generic (PLEG): container finished" 
podID="93bf46a8-2942-4b36-9853-88ff5c6e756b" containerID="8301cf9275eda41f8581b82288261e3c0d7d73701303206c3c8836982952549f" exitCode=2 Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.245585 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93bf46a8-2942-4b36-9853-88ff5c6e756b","Type":"ContainerDied","Data":"8301cf9275eda41f8581b82288261e3c0d7d73701303206c3c8836982952549f"} Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.257613 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42d9518b-30e3-453d-9680-c84861b479e5","Type":"ContainerDied","Data":"c40b2163dc8728b2a720be96b3020cd5c25a673d7c2540fa72880cddbde0275a"} Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.259255 4806 scope.go:117] "RemoveContainer" containerID="c1e20f70cd9e0de61ac9187c7dcc8c3273585475d8fc0e0d6859546ed5a413f4" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.258122 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.266668 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7bcf6cb6cc-b9fxx" event={"ID":"a0dff0ed-4136-4f1b-b3d4-5eb610a99d38","Type":"ContainerDied","Data":"a3801e557d6f2db14f64621f4c38a600712a179b8e545df0f587f6cdc5858021"} Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.266756 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7bcf6cb6cc-b9fxx" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.298668 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7bcf6cb6cc-b9fxx"] Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.304137 4806 scope.go:117] "RemoveContainer" containerID="193b314a43294d9e54ad42bc4ba171b212b7d0142521aaead843c1a609645606" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.306169 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7bcf6cb6cc-b9fxx"] Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.320388 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.330652 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f39640fb-b2ef-4514-84d0-38c6d07adb11-openstack-config\") pod \"openstackclient\" (UID: \"f39640fb-b2ef-4514-84d0-38c6d07adb11\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.330730 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f39640fb-b2ef-4514-84d0-38c6d07adb11-openstack-config-secret\") pod \"openstackclient\" (UID: \"f39640fb-b2ef-4514-84d0-38c6d07adb11\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.330787 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctzcb\" (UniqueName: \"kubernetes.io/projected/f39640fb-b2ef-4514-84d0-38c6d07adb11-kube-api-access-ctzcb\") pod \"openstackclient\" (UID: \"f39640fb-b2ef-4514-84d0-38c6d07adb11\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.330802 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f39640fb-b2ef-4514-84d0-38c6d07adb11-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f39640fb-b2ef-4514-84d0-38c6d07adb11\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.332423 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f39640fb-b2ef-4514-84d0-38c6d07adb11-openstack-config\") pod \"openstackclient\" (UID: \"f39640fb-b2ef-4514-84d0-38c6d07adb11\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.335904 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.336979 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f39640fb-b2ef-4514-84d0-38c6d07adb11-openstack-config-secret\") pod \"openstackclient\" (UID: \"f39640fb-b2ef-4514-84d0-38c6d07adb11\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.338165 4806 scope.go:117] "RemoveContainer" containerID="703c7f76126c519f5df185e47194d1d9b3f23aed98c6ae176f3ccd52e6ab29ea" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.346034 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39640fb-b2ef-4514-84d0-38c6d07adb11-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f39640fb-b2ef-4514-84d0-38c6d07adb11\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.359340 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.361398 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.364976 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.365363 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctzcb\" (UniqueName: \"kubernetes.io/projected/f39640fb-b2ef-4514-84d0-38c6d07adb11-kube-api-access-ctzcb\") pod \"openstackclient\" (UID: \"f39640fb-b2ef-4514-84d0-38c6d07adb11\") " pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.401674 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.431902 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/462d2770-3796-4a02-b83e-91de31a08bd0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.432040 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462d2770-3796-4a02-b83e-91de31a08bd0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.432070 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8jgd\" (UniqueName: \"kubernetes.io/projected/462d2770-3796-4a02-b83e-91de31a08bd0-kube-api-access-f8jgd\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.432100 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462d2770-3796-4a02-b83e-91de31a08bd0-scripts\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.432224 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/462d2770-3796-4a02-b83e-91de31a08bd0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.432445 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462d2770-3796-4a02-b83e-91de31a08bd0-config-data\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: E0126 08:12:06.436159 4806 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 26 08:12:06 crc kubenswrapper[4806]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2_0(51827aa4f084b9eea725a022b1ce4ed385eba891064f9964aa4051ce397a8152): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"51827aa4f084b9eea725a022b1ce4ed385eba891064f9964aa4051ce397a8152" Netns:"/var/run/netns/9941462e-9add-4144-a5e7-0e118264cd6d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=51827aa4f084b9eea725a022b1ce4ed385eba891064f9964aa4051ce397a8152;K8S_POD_UID=5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2]: expected pod UID "5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2" but got "f39640fb-b2ef-4514-84d0-38c6d07adb11" from Kube API Jan 26 08:12:06 crc kubenswrapper[4806]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 08:12:06 crc kubenswrapper[4806]: > Jan 26 08:12:06 crc kubenswrapper[4806]: E0126 08:12:06.436222 4806 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 26 08:12:06 crc kubenswrapper[4806]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2_0(51827aa4f084b9eea725a022b1ce4ed385eba891064f9964aa4051ce397a8152): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"51827aa4f084b9eea725a022b1ce4ed385eba891064f9964aa4051ce397a8152" Netns:"/var/run/netns/9941462e-9add-4144-a5e7-0e118264cd6d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=51827aa4f084b9eea725a022b1ce4ed385eba891064f9964aa4051ce397a8152;K8S_POD_UID=5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2]: expected pod UID "5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2" but got "f39640fb-b2ef-4514-84d0-38c6d07adb11" from Kube API Jan 26 08:12:06 crc kubenswrapper[4806]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 26 08:12:06 crc kubenswrapper[4806]: > pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.506778 4806 scope.go:117] "RemoveContainer" containerID="fd594b637acf82dc96d273bb5255ca283aeb955bd5f41111dc327731bb40271d" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.515837 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.534349 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/462d2770-3796-4a02-b83e-91de31a08bd0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.534394 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8jgd\" (UniqueName: \"kubernetes.io/projected/462d2770-3796-4a02-b83e-91de31a08bd0-kube-api-access-f8jgd\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.534423 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462d2770-3796-4a02-b83e-91de31a08bd0-scripts\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.534450 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/462d2770-3796-4a02-b83e-91de31a08bd0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.534484 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462d2770-3796-4a02-b83e-91de31a08bd0-config-data\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.534578 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/462d2770-3796-4a02-b83e-91de31a08bd0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.535042 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/462d2770-3796-4a02-b83e-91de31a08bd0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.542276 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/462d2770-3796-4a02-b83e-91de31a08bd0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.546114 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/462d2770-3796-4a02-b83e-91de31a08bd0-config-data\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.549431 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/462d2770-3796-4a02-b83e-91de31a08bd0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.553398 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/462d2770-3796-4a02-b83e-91de31a08bd0-scripts\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.557783 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8jgd\" (UniqueName: \"kubernetes.io/projected/462d2770-3796-4a02-b83e-91de31a08bd0-kube-api-access-f8jgd\") pod \"cinder-scheduler-0\" (UID: \"462d2770-3796-4a02-b83e-91de31a08bd0\") " pod="openstack/cinder-scheduler-0" Jan 26 08:12:06 crc kubenswrapper[4806]: I0126 08:12:06.813888 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.062131 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="101d2806-ced2-4267-86ec-114320756e46" path="/var/lib/kubelet/pods/101d2806-ced2-4267-86ec-114320756e46/volumes" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.075935 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3" path="/var/lib/kubelet/pods/1d88b0ca-e8c5-40b1-b156-4f3fb3c612c3/volumes" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.076834 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42d9518b-30e3-453d-9680-c84861b479e5" path="/var/lib/kubelet/pods/42d9518b-30e3-453d-9680-c84861b479e5/volumes" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.078429 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0dff0ed-4136-4f1b-b3d4-5eb610a99d38" path="/var/lib/kubelet/pods/a0dff0ed-4136-4f1b-b3d4-5eb610a99d38/volumes" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.080894 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.257328 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.286323 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f39640fb-b2ef-4514-84d0-38c6d07adb11","Type":"ContainerStarted","Data":"a0cc1e0f7fa058090f74aa2876fcbef107a87731e039331ab2448e5cabf4d9a0"} Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.292922 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.293626 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"462d2770-3796-4a02-b83e-91de31a08bd0","Type":"ContainerStarted","Data":"d2423921447bd642e5be6c67f87e0f3b76e2d4f38e8def397f8969711bc898e0"} Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.305114 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.308190 4806 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2" podUID="f39640fb-b2ef-4514-84d0-38c6d07adb11" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.358956 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-openstack-config\") pod \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.359312 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-openstack-config-secret\") pod \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.359387 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gbtf\" (UniqueName: \"kubernetes.io/projected/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-kube-api-access-5gbtf\") pod \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.359449 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-combined-ca-bundle\") pod \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\" (UID: \"5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2\") " Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.360508 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2" (UID: "5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.367022 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2" (UID: "5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.368039 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2" (UID: "5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.374532 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-kube-api-access-5gbtf" (OuterVolumeSpecName: "kube-api-access-5gbtf") pod "5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2" (UID: "5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2"). 
InnerVolumeSpecName "kube-api-access-5gbtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.462122 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.462160 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.462172 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gbtf\" (UniqueName: \"kubernetes.io/projected/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-kube-api-access-5gbtf\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.462180 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:07 crc kubenswrapper[4806]: I0126 08:12:07.746173 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.305354 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.308585 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"462d2770-3796-4a02-b83e-91de31a08bd0","Type":"ContainerStarted","Data":"06b0addf05c6144bff32c890a619b5be42c6f34257befbf70c3494d4774bef7e"} Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.322730 4806 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2" podUID="f39640fb-b2ef-4514-84d0-38c6d07adb11" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.583208 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5c479d9749-55sxk"] Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.588166 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.603639 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.603879 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-6ksgl" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.607113 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.607472 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5c479d9749-55sxk"] Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.690586 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-config-data-custom\") pod \"heat-engine-5c479d9749-55sxk\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.690675 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcglg\" (UniqueName: \"kubernetes.io/projected/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-kube-api-access-lcglg\") pod \"heat-engine-5c479d9749-55sxk\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.690710 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-combined-ca-bundle\") pod \"heat-engine-5c479d9749-55sxk\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.690732 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-config-data\") pod \"heat-engine-5c479d9749-55sxk\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.736186 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-7448bc75bf-txwxj"] Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.738041 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.751905 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.783441 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7448bc75bf-txwxj"] Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.803025 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-combined-ca-bundle\") pod \"heat-cfnapi-7448bc75bf-txwxj\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.803088 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-config-data-custom\") pod \"heat-engine-5c479d9749-55sxk\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.803124 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgbq\" (UniqueName: \"kubernetes.io/projected/4d637212-f269-4915-b30b-4ffe4e19bb2d-kube-api-access-vjgbq\") pod \"heat-cfnapi-7448bc75bf-txwxj\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.803144 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-config-data-custom\") pod \"heat-cfnapi-7448bc75bf-txwxj\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.803176 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcglg\" (UniqueName: \"kubernetes.io/projected/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-kube-api-access-lcglg\") pod \"heat-engine-5c479d9749-55sxk\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.803200 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-combined-ca-bundle\") pod \"heat-engine-5c479d9749-55sxk\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.803215 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-config-data\") pod \"heat-engine-5c479d9749-55sxk\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.803267 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-config-data\") pod \"heat-cfnapi-7448bc75bf-txwxj\" (UID: 
\"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.812467 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-combined-ca-bundle\") pod \"heat-engine-5c479d9749-55sxk\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.812757 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-config-data-custom\") pod \"heat-engine-5c479d9749-55sxk\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.818401 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-config-data\") pod \"heat-engine-5c479d9749-55sxk\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.853208 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcglg\" (UniqueName: \"kubernetes.io/projected/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-kube-api-access-lcglg\") pod \"heat-engine-5c479d9749-55sxk\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.903433 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-gzqdl"] Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.905382 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.920213 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjgbq\" (UniqueName: \"kubernetes.io/projected/4d637212-f269-4915-b30b-4ffe4e19bb2d-kube-api-access-vjgbq\") pod \"heat-cfnapi-7448bc75bf-txwxj\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.920262 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-config-data-custom\") pod \"heat-cfnapi-7448bc75bf-txwxj\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.920350 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-config-data\") pod \"heat-cfnapi-7448bc75bf-txwxj\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.920418 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-combined-ca-bundle\") pod \"heat-cfnapi-7448bc75bf-txwxj\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.938409 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.950365 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-gzqdl"] Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.967277 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-config-data-custom\") pod \"heat-cfnapi-7448bc75bf-txwxj\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:08 crc kubenswrapper[4806]: I0126 08:12:08.994477 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-combined-ca-bundle\") pod \"heat-cfnapi-7448bc75bf-txwxj\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.000307 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-config-data\") pod \"heat-cfnapi-7448bc75bf-txwxj\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.001267 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjgbq\" (UniqueName: \"kubernetes.io/projected/4d637212-f269-4915-b30b-4ffe4e19bb2d-kube-api-access-vjgbq\") pod \"heat-cfnapi-7448bc75bf-txwxj\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:09 crc 
kubenswrapper[4806]: I0126 08:12:09.009909 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-957b6fbf8-7f82k"] Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.021726 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.024991 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.036266 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.036336 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.036362 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.036428 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-config\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.036464 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.050751 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb9nn\" (UniqueName: \"kubernetes.io/projected/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-kube-api-access-xb9nn\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.086298 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.155739 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.158654 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-config-data\") pod \"heat-api-957b6fbf8-7f82k\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.158715 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb9nn\" (UniqueName: \"kubernetes.io/projected/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-kube-api-access-xb9nn\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.158766 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.158785 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-config-data-custom\") pod \"heat-api-957b6fbf8-7f82k\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.158803 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlzkz\" (UniqueName: \"kubernetes.io/projected/ce987c26-24cc-40b4-9898-9f00d4eda52e-kube-api-access-hlzkz\") pod \"heat-api-957b6fbf8-7f82k\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.158835 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-combined-ca-bundle\") pod \"heat-api-957b6fbf8-7f82k\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.158858 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.158882 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: 
\"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.158903 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-config\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.159445 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-config\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.156635 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.164438 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.158388 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2" path="/var/lib/kubelet/pods/5a30e60a-8dcc-43ea-bb0c-25ad2c6a62f2/volumes" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.174509 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.179922 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.205282 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb9nn\" (UniqueName: \"kubernetes.io/projected/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-kube-api-access-xb9nn\") pod \"dnsmasq-dns-688b9f5b49-gzqdl\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.223595 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-957b6fbf8-7f82k"] Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.259993 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-config-data\") pod \"heat-api-957b6fbf8-7f82k\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc 
kubenswrapper[4806]: I0126 08:12:09.260074 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-config-data-custom\") pod \"heat-api-957b6fbf8-7f82k\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.260096 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlzkz\" (UniqueName: \"kubernetes.io/projected/ce987c26-24cc-40b4-9898-9f00d4eda52e-kube-api-access-hlzkz\") pod \"heat-api-957b6fbf8-7f82k\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.260121 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-combined-ca-bundle\") pod \"heat-api-957b6fbf8-7f82k\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.264044 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-combined-ca-bundle\") pod \"heat-api-957b6fbf8-7f82k\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.274429 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-config-data-custom\") pod \"heat-api-957b6fbf8-7f82k\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.285043 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-config-data\") pod \"heat-api-957b6fbf8-7f82k\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.321126 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlzkz\" (UniqueName: \"kubernetes.io/projected/ce987c26-24cc-40b4-9898-9f00d4eda52e-kube-api-access-hlzkz\") pod \"heat-api-957b6fbf8-7f82k\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.369102 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.376135 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"462d2770-3796-4a02-b83e-91de31a08bd0","Type":"ContainerStarted","Data":"2e65b8452ae5dbddde50687453af52de951a4e0607ef0d2fd954d042e6920d6f"} Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.404620 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.404604882 podStartE2EDuration="3.404604882s" podCreationTimestamp="2026-01-26 08:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:09.396334271 +0000 UTC m=+1108.660742327" watchObservedRunningTime="2026-01-26 08:12:09.404604882 +0000 UTC m=+1108.669012928" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.448646 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:09 crc kubenswrapper[4806]: I0126 08:12:09.721833 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5c479d9749-55sxk"] Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.212757 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-7448bc75bf-txwxj"] Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.222119 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-gzqdl"] Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.383931 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-957b6fbf8-7f82k"] Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.418804 4806 generic.go:334] "Generic (PLEG): container finished" podID="93bf46a8-2942-4b36-9853-88ff5c6e756b" containerID="ce21628d69b54f9e7078ea4cc4723a743027cca289649838f8e9a6552da1cecf" exitCode=0 Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.418862 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93bf46a8-2942-4b36-9853-88ff5c6e756b","Type":"ContainerDied","Data":"ce21628d69b54f9e7078ea4cc4723a743027cca289649838f8e9a6552da1cecf"} Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.440628 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" event={"ID":"c603efb4-a0d1-474b-90a0-fc0c93aa37a3","Type":"ContainerStarted","Data":"120041f453d91f3a060585dafe4b2eb8c3464e93679b402d89e8e5fcf55a90f6"} Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.463738 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5c479d9749-55sxk" event={"ID":"ceffb75b-59c2-41e0-96e9-4ccbb69ee956","Type":"ContainerStarted","Data":"5e435ae63256b96544903f4c8e60a13b3975f9f77ccedf6957afb8c1d8a46baf"} Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.463780 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5c479d9749-55sxk" event={"ID":"ceffb75b-59c2-41e0-96e9-4ccbb69ee956","Type":"ContainerStarted","Data":"d408ccfd3c04cfa1714371533cdf8372b3b7f0cc4f09fdc7d0daf1ec778cc50a"} Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.464896 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.485316 4806 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" event={"ID":"4d637212-f269-4915-b30b-4ffe4e19bb2d","Type":"ContainerStarted","Data":"296de1632d3165169c06a08c93427a99e4bb1f1087217f43e5e9f692f03646dd"} Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.487329 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5c479d9749-55sxk" podStartSLOduration=2.48731343 podStartE2EDuration="2.48731343s" podCreationTimestamp="2026-01-26 08:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:10.480194111 +0000 UTC m=+1109.744602167" watchObservedRunningTime="2026-01-26 08:12:10.48731343 +0000 UTC m=+1109.751721486" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.665037 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.705746 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93bf46a8-2942-4b36-9853-88ff5c6e756b-run-httpd\") pod \"93bf46a8-2942-4b36-9853-88ff5c6e756b\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.705806 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93bf46a8-2942-4b36-9853-88ff5c6e756b-log-httpd\") pod \"93bf46a8-2942-4b36-9853-88ff5c6e756b\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.705919 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-scripts\") pod \"93bf46a8-2942-4b36-9853-88ff5c6e756b\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.705956 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-combined-ca-bundle\") pod \"93bf46a8-2942-4b36-9853-88ff5c6e756b\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.705991 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-config-data\") pod \"93bf46a8-2942-4b36-9853-88ff5c6e756b\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.706026 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-sg-core-conf-yaml\") pod \"93bf46a8-2942-4b36-9853-88ff5c6e756b\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.706083 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrfc7\" (UniqueName: \"kubernetes.io/projected/93bf46a8-2942-4b36-9853-88ff5c6e756b-kube-api-access-vrfc7\") pod \"93bf46a8-2942-4b36-9853-88ff5c6e756b\" (UID: \"93bf46a8-2942-4b36-9853-88ff5c6e756b\") " Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.706734 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/93bf46a8-2942-4b36-9853-88ff5c6e756b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "93bf46a8-2942-4b36-9853-88ff5c6e756b" (UID: "93bf46a8-2942-4b36-9853-88ff5c6e756b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.707223 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93bf46a8-2942-4b36-9853-88ff5c6e756b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "93bf46a8-2942-4b36-9853-88ff5c6e756b" (UID: "93bf46a8-2942-4b36-9853-88ff5c6e756b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.713112 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-scripts" (OuterVolumeSpecName: "scripts") pod "93bf46a8-2942-4b36-9853-88ff5c6e756b" (UID: "93bf46a8-2942-4b36-9853-88ff5c6e756b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.714142 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93bf46a8-2942-4b36-9853-88ff5c6e756b-kube-api-access-vrfc7" (OuterVolumeSpecName: "kube-api-access-vrfc7") pod "93bf46a8-2942-4b36-9853-88ff5c6e756b" (UID: "93bf46a8-2942-4b36-9853-88ff5c6e756b"). InnerVolumeSpecName "kube-api-access-vrfc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.747895 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "93bf46a8-2942-4b36-9853-88ff5c6e756b" (UID: "93bf46a8-2942-4b36-9853-88ff5c6e756b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.762296 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93bf46a8-2942-4b36-9853-88ff5c6e756b" (UID: "93bf46a8-2942-4b36-9853-88ff5c6e756b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.773341 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-config-data" (OuterVolumeSpecName: "config-data") pod "93bf46a8-2942-4b36-9853-88ff5c6e756b" (UID: "93bf46a8-2942-4b36-9853-88ff5c6e756b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.809660 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93bf46a8-2942-4b36-9853-88ff5c6e756b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.809700 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/93bf46a8-2942-4b36-9853-88ff5c6e756b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.809711 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.809720 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.809729 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.809737 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/93bf46a8-2942-4b36-9853-88ff5c6e756b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.809745 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrfc7\" (UniqueName: \"kubernetes.io/projected/93bf46a8-2942-4b36-9853-88ff5c6e756b-kube-api-access-vrfc7\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:10 crc kubenswrapper[4806]: I0126 08:12:10.966850 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6b8f96b47b-sbsnb" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.039677 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d485d788d-5q4tb"] Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.509407 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"93bf46a8-2942-4b36-9853-88ff5c6e756b","Type":"ContainerDied","Data":"567c7947d52421557b86a983e71f80bdae2f486af0016a50a666767ebbd09ef3"} Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.509473 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.509504 4806 scope.go:117] "RemoveContainer" containerID="8301cf9275eda41f8581b82288261e3c0d7d73701303206c3c8836982952549f" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.514298 4806 generic.go:334] "Generic (PLEG): container finished" podID="c603efb4-a0d1-474b-90a0-fc0c93aa37a3" containerID="528e5b8373edeb679c83b5b012f1fa9fdd449f1325661a9bfa96912bc4d8e006" exitCode=0 Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.514435 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" event={"ID":"c603efb4-a0d1-474b-90a0-fc0c93aa37a3","Type":"ContainerDied","Data":"528e5b8373edeb679c83b5b012f1fa9fdd449f1325661a9bfa96912bc4d8e006"} Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.531700 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-957b6fbf8-7f82k" event={"ID":"ce987c26-24cc-40b4-9898-9f00d4eda52e","Type":"ContainerStarted","Data":"6dbdaba3d63981be136fea51c50664ace2b585d642d1f7e34cbfe737cf923d67"} Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.565293 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.576257 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.657788 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:11 crc kubenswrapper[4806]: E0126 08:12:11.658416 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93bf46a8-2942-4b36-9853-88ff5c6e756b" containerName="sg-core" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.658433 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="93bf46a8-2942-4b36-9853-88ff5c6e756b" containerName="sg-core" Jan 26 08:12:11 crc kubenswrapper[4806]: E0126 08:12:11.658463 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93bf46a8-2942-4b36-9853-88ff5c6e756b" containerName="ceilometer-notification-agent" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.658469 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="93bf46a8-2942-4b36-9853-88ff5c6e756b" containerName="ceilometer-notification-agent" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.658667 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="93bf46a8-2942-4b36-9853-88ff5c6e756b" containerName="ceilometer-notification-agent" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.658688 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="93bf46a8-2942-4b36-9853-88ff5c6e756b" containerName="sg-core" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.660278 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.671608 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.671961 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.687058 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.756146 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-config-data\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.756210 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca560845-7250-4cf5-90d0-449180808340-log-httpd\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.756253 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-scripts\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.756302 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.756339 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.756390 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk5f9\" (UniqueName: \"kubernetes.io/projected/ca560845-7250-4cf5-90d0-449180808340-kube-api-access-nk5f9\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.756408 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca560845-7250-4cf5-90d0-449180808340-run-httpd\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.810920 4806 scope.go:117] "RemoveContainer" containerID="ce21628d69b54f9e7078ea4cc4723a743027cca289649838f8e9a6552da1cecf" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.815745 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 
08:12:11.863181 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-config-data\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.863228 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca560845-7250-4cf5-90d0-449180808340-log-httpd\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.863279 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-scripts\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.863340 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.863380 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.863452 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk5f9\" (UniqueName: \"kubernetes.io/projected/ca560845-7250-4cf5-90d0-449180808340-kube-api-access-nk5f9\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.863477 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca560845-7250-4cf5-90d0-449180808340-run-httpd\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.877640 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-scripts\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.878603 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.881962 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca560845-7250-4cf5-90d0-449180808340-run-httpd\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.882035 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca560845-7250-4cf5-90d0-449180808340-log-httpd\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.883738 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.883938 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-config-data\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:11 crc kubenswrapper[4806]: I0126 08:12:11.887447 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk5f9\" (UniqueName: \"kubernetes.io/projected/ca560845-7250-4cf5-90d0-449180808340-kube-api-access-nk5f9\") pod \"ceilometer-0\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " pod="openstack/ceilometer-0" Jan 26 08:12:12 crc kubenswrapper[4806]: I0126 08:12:12.046878 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:12:12 crc kubenswrapper[4806]: I0126 08:12:12.559970 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" event={"ID":"c603efb4-a0d1-474b-90a0-fc0c93aa37a3","Type":"ContainerStarted","Data":"c195900afd175c117e075ab725623539c9d20fd9d8dc8574887dd3ddfe48f7ca"} Jan 26 08:12:12 crc kubenswrapper[4806]: I0126 08:12:12.560035 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:12 crc kubenswrapper[4806]: I0126 08:12:12.580536 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" podStartSLOduration=4.580502439 podStartE2EDuration="4.580502439s" podCreationTimestamp="2026-01-26 08:12:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:12.577135234 +0000 UTC m=+1111.841543300" watchObservedRunningTime="2026-01-26 08:12:12.580502439 +0000 UTC m=+1111.844910495" Jan 26 08:12:12 crc kubenswrapper[4806]: I0126 08:12:12.658496 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:13 crc kubenswrapper[4806]: I0126 08:12:13.053500 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93bf46a8-2942-4b36-9853-88ff5c6e756b" path="/var/lib/kubelet/pods/93bf46a8-2942-4b36-9853-88ff5c6e756b/volumes" Jan 26 08:12:13 crc kubenswrapper[4806]: I0126 08:12:13.603992 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca560845-7250-4cf5-90d0-449180808340","Type":"ContainerStarted","Data":"96ef9e0af953e12a12be73791d628abfbedd4646af69ed385dc5cfa75c7cfdaf"} Jan 26 08:12:15 crc kubenswrapper[4806]: I0126 08:12:15.630957 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" event={"ID":"4d637212-f269-4915-b30b-4ffe4e19bb2d","Type":"ContainerStarted","Data":"30a00e07cdb1636dde2a329ce2788f25a3258a3af3728c5b7cebffe910c6a196"} Jan 26 
08:12:15 crc kubenswrapper[4806]: I0126 08:12:15.631575 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:15 crc kubenswrapper[4806]: I0126 08:12:15.645401 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca560845-7250-4cf5-90d0-449180808340","Type":"ContainerStarted","Data":"d821d0cd3a0369da46a6a3b07f9463e2b21db6542f0834eaf3b0becc69f51cfd"} Jan 26 08:12:15 crc kubenswrapper[4806]: I0126 08:12:15.652974 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" podStartSLOduration=3.319925155 podStartE2EDuration="7.652957682s" podCreationTimestamp="2026-01-26 08:12:08 +0000 UTC" firstStartedPulling="2026-01-26 08:12:10.253110851 +0000 UTC m=+1109.517518907" lastFinishedPulling="2026-01-26 08:12:14.586143378 +0000 UTC m=+1113.850551434" observedRunningTime="2026-01-26 08:12:15.648313552 +0000 UTC m=+1114.912721608" watchObservedRunningTime="2026-01-26 08:12:15.652957682 +0000 UTC m=+1114.917365728" Jan 26 08:12:15 crc kubenswrapper[4806]: I0126 08:12:15.661671 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-957b6fbf8-7f82k" event={"ID":"ce987c26-24cc-40b4-9898-9f00d4eda52e","Type":"ContainerStarted","Data":"b705fd70926481698eeb503d4adfdb74f689027ab44984e054b013403599b08c"} Jan 26 08:12:15 crc kubenswrapper[4806]: I0126 08:12:15.662011 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:15 crc kubenswrapper[4806]: I0126 08:12:15.684680 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-957b6fbf8-7f82k" podStartSLOduration=3.526350698 podStartE2EDuration="7.684663369s" podCreationTimestamp="2026-01-26 08:12:08 +0000 UTC" firstStartedPulling="2026-01-26 08:12:10.430879502 +0000 UTC m=+1109.695287558" lastFinishedPulling="2026-01-26 08:12:14.589192173 +0000 UTC m=+1113.853600229" observedRunningTime="2026-01-26 08:12:15.680344338 +0000 UTC m=+1114.944752394" watchObservedRunningTime="2026-01-26 08:12:15.684663369 +0000 UTC m=+1114.949071425" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.413660 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5d649c5968-gb8r4"] Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.415015 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.443155 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5d649c5968-gb8r4"] Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.487480 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4656694c-fa67-4546-bf62-bc929866aeae-config-data-custom\") pod \"heat-engine-5d649c5968-gb8r4\" (UID: \"4656694c-fa67-4546-bf62-bc929866aeae\") " pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.487562 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86lrh\" (UniqueName: \"kubernetes.io/projected/4656694c-fa67-4546-bf62-bc929866aeae-kube-api-access-86lrh\") pod \"heat-engine-5d649c5968-gb8r4\" (UID: \"4656694c-fa67-4546-bf62-bc929866aeae\") " pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.487603 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656694c-fa67-4546-bf62-bc929866aeae-config-data\") pod \"heat-engine-5d649c5968-gb8r4\" (UID: \"4656694c-fa67-4546-bf62-bc929866aeae\") " pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.487662 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656694c-fa67-4546-bf62-bc929866aeae-combined-ca-bundle\") pod \"heat-engine-5d649c5968-gb8r4\" (UID: \"4656694c-fa67-4546-bf62-bc929866aeae\") " pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.510647 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5c84c55f78-ls58x"] Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.512021 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.531139 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5c84c55f78-ls58x"] Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.573183 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7dd479566-6k7mz"] Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.583833 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.594771 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656694c-fa67-4546-bf62-bc929866aeae-combined-ca-bundle\") pod \"heat-engine-5d649c5968-gb8r4\" (UID: \"4656694c-fa67-4546-bf62-bc929866aeae\") " pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.594822 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-combined-ca-bundle\") pod \"heat-cfnapi-5c84c55f78-ls58x\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.594881 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-config-data-custom\") pod \"heat-cfnapi-5c84c55f78-ls58x\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.594908 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f644c\" (UniqueName: \"kubernetes.io/projected/8460c49b-4775-409d-b4c0-177929af70a4-kube-api-access-f644c\") pod \"heat-cfnapi-5c84c55f78-ls58x\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.595112 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4656694c-fa67-4546-bf62-bc929866aeae-config-data-custom\") pod \"heat-engine-5d649c5968-gb8r4\" (UID: \"4656694c-fa67-4546-bf62-bc929866aeae\") " pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.595168 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-config-data\") pod \"heat-cfnapi-5c84c55f78-ls58x\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.595251 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86lrh\" (UniqueName: \"kubernetes.io/projected/4656694c-fa67-4546-bf62-bc929866aeae-kube-api-access-86lrh\") pod \"heat-engine-5d649c5968-gb8r4\" (UID: \"4656694c-fa67-4546-bf62-bc929866aeae\") " pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.595337 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656694c-fa67-4546-bf62-bc929866aeae-config-data\") pod \"heat-engine-5d649c5968-gb8r4\" (UID: \"4656694c-fa67-4546-bf62-bc929866aeae\") " pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.634301 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4656694c-fa67-4546-bf62-bc929866aeae-config-data-custom\") pod 
\"heat-engine-5d649c5968-gb8r4\" (UID: \"4656694c-fa67-4546-bf62-bc929866aeae\") " pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.634511 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656694c-fa67-4546-bf62-bc929866aeae-config-data\") pod \"heat-engine-5d649c5968-gb8r4\" (UID: \"4656694c-fa67-4546-bf62-bc929866aeae\") " pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.653788 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656694c-fa67-4546-bf62-bc929866aeae-combined-ca-bundle\") pod \"heat-engine-5d649c5968-gb8r4\" (UID: \"4656694c-fa67-4546-bf62-bc929866aeae\") " pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.699853 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7dd479566-6k7mz"] Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.700365 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86lrh\" (UniqueName: \"kubernetes.io/projected/4656694c-fa67-4546-bf62-bc929866aeae-kube-api-access-86lrh\") pod \"heat-engine-5d649c5968-gb8r4\" (UID: \"4656694c-fa67-4546-bf62-bc929866aeae\") " pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.723195 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-config-data\") pod \"heat-api-7dd479566-6k7mz\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.723580 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w89l\" (UniqueName: \"kubernetes.io/projected/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-kube-api-access-7w89l\") pod \"heat-api-7dd479566-6k7mz\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.723663 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-config-data-custom\") pod \"heat-api-7dd479566-6k7mz\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.723742 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-combined-ca-bundle\") pod \"heat-cfnapi-5c84c55f78-ls58x\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.723849 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-combined-ca-bundle\") pod \"heat-api-7dd479566-6k7mz\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.723886 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-config-data-custom\") pod \"heat-cfnapi-5c84c55f78-ls58x\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.723906 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f644c\" (UniqueName: \"kubernetes.io/projected/8460c49b-4775-409d-b4c0-177929af70a4-kube-api-access-f644c\") pod \"heat-cfnapi-5c84c55f78-ls58x\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.723982 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-config-data\") pod \"heat-cfnapi-5c84c55f78-ls58x\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.743117 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.747069 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-config-data-custom\") pod \"heat-cfnapi-5c84c55f78-ls58x\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.753804 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-config-data\") pod \"heat-cfnapi-5c84c55f78-ls58x\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.816732 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-combined-ca-bundle\") pod \"heat-cfnapi-5c84c55f78-ls58x\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.820955 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f644c\" (UniqueName: \"kubernetes.io/projected/8460c49b-4775-409d-b4c0-177929af70a4-kube-api-access-f644c\") pod \"heat-cfnapi-5c84c55f78-ls58x\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.821006 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca560845-7250-4cf5-90d0-449180808340","Type":"ContainerStarted","Data":"419d2b00eb63da05da2dd7fa5da1a852c0fcdf75897811126d6b6766bf1d5cbd"} Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.821040 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca560845-7250-4cf5-90d0-449180808340","Type":"ContainerStarted","Data":"f7e87a97636c9aa22a55bafb57e12cbbc0fbf00e97b6e15c251eeed86aab3612"} Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.831483 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-config-data\") pod \"heat-api-7dd479566-6k7mz\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.845840 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w89l\" (UniqueName: \"kubernetes.io/projected/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-kube-api-access-7w89l\") pod \"heat-api-7dd479566-6k7mz\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.845999 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-config-data-custom\") pod \"heat-api-7dd479566-6k7mz\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.846240 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-combined-ca-bundle\") pod \"heat-api-7dd479566-6k7mz\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.832336 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.851191 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-config-data-custom\") pod \"heat-api-7dd479566-6k7mz\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.857600 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-config-data\") pod \"heat-api-7dd479566-6k7mz\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.859402 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-combined-ca-bundle\") pod \"heat-api-7dd479566-6k7mz\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.883281 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w89l\" (UniqueName: \"kubernetes.io/projected/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-kube-api-access-7w89l\") pod \"heat-api-7dd479566-6k7mz\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:16 crc kubenswrapper[4806]: I0126 08:12:16.930395 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:17 crc kubenswrapper[4806]: I0126 08:12:17.271088 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 08:12:17 crc kubenswrapper[4806]: I0126 08:12:17.506810 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5d649c5968-gb8r4"] Jan 26 08:12:17 crc kubenswrapper[4806]: I0126 08:12:17.514654 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5c84c55f78-ls58x"] Jan 26 08:12:17 crc kubenswrapper[4806]: I0126 08:12:17.771619 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7dd479566-6k7mz"] Jan 26 08:12:17 crc kubenswrapper[4806]: I0126 08:12:17.832166 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" event={"ID":"8460c49b-4775-409d-b4c0-177929af70a4","Type":"ContainerStarted","Data":"59f3ab31eadbbf71420c82d745e783f3c0861db07a3211d4651d783d14accf1d"} Jan 26 08:12:17 crc kubenswrapper[4806]: I0126 08:12:17.834557 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d649c5968-gb8r4" event={"ID":"4656694c-fa67-4546-bf62-bc929866aeae","Type":"ContainerStarted","Data":"e37bdebe68614579a6bee6a4a4979ded67a92196da1a9befabaa3853e825f984"} Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.848068 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d649c5968-gb8r4" event={"ID":"4656694c-fa67-4546-bf62-bc929866aeae","Type":"ContainerStarted","Data":"24b4d7233d261dd6679ff912d8c12a05fefa42fb9b61e8afb81cbabd35d3eb12"} Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.850865 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.857279 4806 generic.go:334] "Generic (PLEG): container finished" podID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" containerID="40a0c14dcf443b8d17354f2b243bd8fcd17511ad07bcfa7eae5518853fc6283e" exitCode=1 Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.857484 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dd479566-6k7mz" event={"ID":"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2","Type":"ContainerDied","Data":"40a0c14dcf443b8d17354f2b243bd8fcd17511ad07bcfa7eae5518853fc6283e"} Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.857567 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dd479566-6k7mz" event={"ID":"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2","Type":"ContainerStarted","Data":"61da011041d5b73b034f8862fab5a36b153fe45e069f97faa7148451fea10c54"} Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.857916 4806 scope.go:117] "RemoveContainer" containerID="40a0c14dcf443b8d17354f2b243bd8fcd17511ad07bcfa7eae5518853fc6283e" Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.875073 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5d649c5968-gb8r4" podStartSLOduration=2.875057451 podStartE2EDuration="2.875057451s" podCreationTimestamp="2026-01-26 08:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:18.867192581 +0000 UTC m=+1118.131600637" watchObservedRunningTime="2026-01-26 08:12:18.875057451 +0000 UTC m=+1118.139465507" Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.877349 4806 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca560845-7250-4cf5-90d0-449180808340","Type":"ContainerStarted","Data":"cb5eb2546a9b03249bcebd5ce1a4cd0ef38b92385cd7033cf6f53b557f91d5f0"} Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.878142 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.880366 4806 generic.go:334] "Generic (PLEG): container finished" podID="8460c49b-4775-409d-b4c0-177929af70a4" containerID="be190f1e6279b35aae81bdf86d98f2aeb28623d90c9e823f7fb7ebad4ee038ce" exitCode=1 Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.880395 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" event={"ID":"8460c49b-4775-409d-b4c0-177929af70a4","Type":"ContainerDied","Data":"be190f1e6279b35aae81bdf86d98f2aeb28623d90c9e823f7fb7ebad4ee038ce"} Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.880672 4806 scope.go:117] "RemoveContainer" containerID="be190f1e6279b35aae81bdf86d98f2aeb28623d90c9e823f7fb7ebad4ee038ce" Jan 26 08:12:18 crc kubenswrapper[4806]: I0126 08:12:18.936306 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7344195190000002 podStartE2EDuration="7.936291973s" podCreationTimestamp="2026-01-26 08:12:11 +0000 UTC" firstStartedPulling="2026-01-26 08:12:12.679692103 +0000 UTC m=+1111.944100159" lastFinishedPulling="2026-01-26 08:12:17.881564557 +0000 UTC m=+1117.145972613" observedRunningTime="2026-01-26 08:12:18.931243392 +0000 UTC m=+1118.195651448" watchObservedRunningTime="2026-01-26 08:12:18.936291973 +0000 UTC m=+1118.200700029" Jan 26 08:12:19 crc kubenswrapper[4806]: I0126 08:12:19.370712 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:12:19 crc kubenswrapper[4806]: I0126 08:12:19.424752 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-6g69f"] Jan 26 08:12:19 crc kubenswrapper[4806]: I0126 08:12:19.425004 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" podUID="ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" containerName="dnsmasq-dns" containerID="cri-o://49c0d8e9f9683e509328152203cc80f37bdfdd244576c95436f96d95be8dfbc1" gracePeriod=10 Jan 26 08:12:19 crc kubenswrapper[4806]: I0126 08:12:19.892307 4806 generic.go:334] "Generic (PLEG): container finished" podID="8460c49b-4775-409d-b4c0-177929af70a4" containerID="faf28906d5ab543cefcb7319c9944f166e49216850d5dc9d199bb6814ca49d79" exitCode=1 Jan 26 08:12:19 crc kubenswrapper[4806]: I0126 08:12:19.892757 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" event={"ID":"8460c49b-4775-409d-b4c0-177929af70a4","Type":"ContainerDied","Data":"faf28906d5ab543cefcb7319c9944f166e49216850d5dc9d199bb6814ca49d79"} Jan 26 08:12:19 crc kubenswrapper[4806]: I0126 08:12:19.892892 4806 scope.go:117] "RemoveContainer" containerID="be190f1e6279b35aae81bdf86d98f2aeb28623d90c9e823f7fb7ebad4ee038ce" Jan 26 08:12:19 crc kubenswrapper[4806]: I0126 08:12:19.893108 4806 scope.go:117] "RemoveContainer" containerID="faf28906d5ab543cefcb7319c9944f166e49216850d5dc9d199bb6814ca49d79" Jan 26 08:12:19 crc kubenswrapper[4806]: E0126 08:12:19.893375 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5c84c55f78-ls58x_openstack(8460c49b-4775-409d-b4c0-177929af70a4)\"" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" podUID="8460c49b-4775-409d-b4c0-177929af70a4" Jan 26 08:12:19 crc kubenswrapper[4806]: I0126 08:12:19.914234 4806 generic.go:334] "Generic (PLEG): container finished" podID="ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" containerID="49c0d8e9f9683e509328152203cc80f37bdfdd244576c95436f96d95be8dfbc1" exitCode=0 Jan 26 08:12:19 crc kubenswrapper[4806]: I0126 08:12:19.914295 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" event={"ID":"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451","Type":"ContainerDied","Data":"49c0d8e9f9683e509328152203cc80f37bdfdd244576c95436f96d95be8dfbc1"} Jan 26 08:12:19 crc kubenswrapper[4806]: I0126 08:12:19.922174 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dd479566-6k7mz" event={"ID":"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2","Type":"ContainerStarted","Data":"21a2259284eeb59ff2943baa8e7323075a4bee4436360e0885ed7593a195e81a"} Jan 26 08:12:19 crc kubenswrapper[4806]: I0126 08:12:19.922496 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:19 crc kubenswrapper[4806]: I0126 08:12:19.964504 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7dd479566-6k7mz" podStartSLOduration=3.964482688 podStartE2EDuration="3.964482688s" podCreationTimestamp="2026-01-26 08:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:19.944450967 +0000 UTC m=+1119.208859043" watchObservedRunningTime="2026-01-26 08:12:19.964482688 +0000 UTC m=+1119.228890744" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.766146 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-957b6fbf8-7f82k"] Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.766688 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-957b6fbf8-7f82k" podUID="ce987c26-24cc-40b4-9898-9f00d4eda52e" containerName="heat-api" containerID="cri-o://b705fd70926481698eeb503d4adfdb74f689027ab44984e054b013403599b08c" gracePeriod=60 Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.790767 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7448bc75bf-txwxj"] Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.791190 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" podUID="4d637212-f269-4915-b30b-4ffe4e19bb2d" containerName="heat-cfnapi" containerID="cri-o://30a00e07cdb1636dde2a329ce2788f25a3258a3af3728c5b7cebffe910c6a196" gracePeriod=60 Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.791578 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-957b6fbf8-7f82k" podUID="ce987c26-24cc-40b4-9898-9f00d4eda52e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.178:8004/healthcheck\": EOF" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.813743 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5687f48547-kz5md"] Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.814951 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.817309 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.817542 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.826960 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5687f48547-kz5md"] Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.867502 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" podUID="4d637212-f269-4915-b30b-4ffe4e19bb2d" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.176:8000/healthcheck\": EOF" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.883740 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-56c9b6cf4b-dl98j"] Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.885362 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.898876 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.899109 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.941507 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-56c9b6cf4b-dl98j"] Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.964352 4806 generic.go:334] "Generic (PLEG): container finished" podID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" containerID="21a2259284eeb59ff2943baa8e7323075a4bee4436360e0885ed7593a195e81a" exitCode=1 Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.964989 4806 scope.go:117] "RemoveContainer" containerID="faf28906d5ab543cefcb7319c9944f166e49216850d5dc9d199bb6814ca49d79" Jan 26 08:12:20 crc kubenswrapper[4806]: E0126 08:12:20.965187 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5c84c55f78-ls58x_openstack(8460c49b-4775-409d-b4c0-177929af70a4)\"" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" podUID="8460c49b-4775-409d-b4c0-177929af70a4" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.965468 4806 scope.go:117] "RemoveContainer" containerID="21a2259284eeb59ff2943baa8e7323075a4bee4436360e0885ed7593a195e81a" Jan 26 08:12:20 crc kubenswrapper[4806]: E0126 08:12:20.965671 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7dd479566-6k7mz_openstack(c8af52b9-239c-4e7e-9f4e-80aa1e4148a2)\"" pod="openstack/heat-api-7dd479566-6k7mz" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.965695 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dd479566-6k7mz" event={"ID":"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2","Type":"ContainerDied","Data":"21a2259284eeb59ff2943baa8e7323075a4bee4436360e0885ed7593a195e81a"} Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 
08:12:20.990402 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-config-data\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.990454 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-internal-tls-certs\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.990491 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-config-data-custom\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.990573 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-internal-tls-certs\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.990609 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-combined-ca-bundle\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.990626 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-public-tls-certs\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.990654 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-562q9\" (UniqueName: \"kubernetes.io/projected/39ad9dca-7dee-4116-ab24-071e59b41dc2-kube-api-access-562q9\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.990691 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-config-data-custom\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.990704 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-combined-ca-bundle\") pod \"heat-api-5687f48547-kz5md\" (UID: 
\"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.990736 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-public-tls-certs\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.990770 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mczsl\" (UniqueName: \"kubernetes.io/projected/e2b71668-05ca-4e62-a0fc-1e240e24caff-kube-api-access-mczsl\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:20 crc kubenswrapper[4806]: I0126 08:12:20.990809 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-config-data\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: E0126 08:12:21.015716 4806 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/de04efef5d8567f4b2340bd9e005181ee817c744d615c99d41de2231194f76ec/diff" to get inode usage: stat /var/lib/containers/storage/overlay/de04efef5d8567f4b2340bd9e005181ee817c744d615c99d41de2231194f76ec/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_horizon-cbdcb8bcc-96jf5_36dba152-b43d-47c4-94bb-874f93b0884f/horizon-log/0.log" to get inode usage: stat /var/log/pods/openstack_horizon-cbdcb8bcc-96jf5_36dba152-b43d-47c4-94bb-874f93b0884f/horizon-log/0.log: no such file or directory Jan 26 08:12:21 crc kubenswrapper[4806]: W0126 08:12:21.070998 4806 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a30e60a_8dcc_43ea_bb0c_25ad2c6a62f2.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a30e60a_8dcc_43ea_bb0c_25ad2c6a62f2.slice: no such file or directory Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.091973 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-config-data\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.092027 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-config-data\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.092063 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-internal-tls-certs\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: 
\"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.092102 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-config-data-custom\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.092127 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-internal-tls-certs\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.092189 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-combined-ca-bundle\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.092207 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-public-tls-certs\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.092260 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-562q9\" (UniqueName: \"kubernetes.io/projected/39ad9dca-7dee-4116-ab24-071e59b41dc2-kube-api-access-562q9\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.092303 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-config-data-custom\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.092318 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-combined-ca-bundle\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.092358 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-public-tls-certs\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.092405 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mczsl\" (UniqueName: \"kubernetes.io/projected/e2b71668-05ca-4e62-a0fc-1e240e24caff-kube-api-access-mczsl\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: 
\"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.108308 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-public-tls-certs\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.111164 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-combined-ca-bundle\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.112546 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-config-data\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.113475 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-internal-tls-certs\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.113500 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-internal-tls-certs\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.113877 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39ad9dca-7dee-4116-ab24-071e59b41dc2-config-data-custom\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.114114 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-combined-ca-bundle\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.117493 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-public-tls-certs\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.132402 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-562q9\" (UniqueName: \"kubernetes.io/projected/39ad9dca-7dee-4116-ab24-071e59b41dc2-kube-api-access-562q9\") pod \"heat-api-5687f48547-kz5md\" (UID: \"39ad9dca-7dee-4116-ab24-071e59b41dc2\") " pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: 
I0126 08:12:21.137567 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-config-data\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.141899 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2b71668-05ca-4e62-a0fc-1e240e24caff-config-data-custom\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.142315 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mczsl\" (UniqueName: \"kubernetes.io/projected/e2b71668-05ca-4e62-a0fc-1e240e24caff-kube-api-access-mczsl\") pod \"heat-cfnapi-56c9b6cf4b-dl98j\" (UID: \"e2b71668-05ca-4e62-a0fc-1e240e24caff\") " pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.169855 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.227135 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.594178 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-78c9d88fc9-5rs9s"] Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.596373 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.600342 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.600504 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.600461 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.603994 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="7bd66660-7983-40b2-94b0-bd9663391fee" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.170:8776/healthcheck\": dial tcp 10.217.0.170:8776: connect: connection refused" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.645297 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-78c9d88fc9-5rs9s"] Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.658676 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.725425 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c49d653-a114-4352-afd1-a2ca43c811f1-log-httpd\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.725510 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c49d653-a114-4352-afd1-a2ca43c811f1-internal-tls-certs\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.725551 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c49d653-a114-4352-afd1-a2ca43c811f1-run-httpd\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.725636 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c49d653-a114-4352-afd1-a2ca43c811f1-config-data\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.725659 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7c49d653-a114-4352-afd1-a2ca43c811f1-etc-swift\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.725703 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c49d653-a114-4352-afd1-a2ca43c811f1-public-tls-certs\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.725742 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c49d653-a114-4352-afd1-a2ca43c811f1-combined-ca-bundle\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.725760 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwg58\" (UniqueName: \"kubernetes.io/projected/7c49d653-a114-4352-afd1-a2ca43c811f1-kube-api-access-rwg58\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.728930 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d5647fbb4-vvzj4"] Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.729144 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d5647fbb4-vvzj4" podUID="c547f19a-cde0-4c88-aa6a-d7b43f868565" containerName="neutron-api" containerID="cri-o://40cbaac9b2d13f60e8b402ba949a9eef260993775c0a8f132c5f61f476c58cec" gracePeriod=30 Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.729546 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d5647fbb4-vvzj4" podUID="c547f19a-cde0-4c88-aa6a-d7b43f868565" containerName="neutron-httpd" 
containerID="cri-o://dcdc9e36654ccc8cff04cd981b11301de74f383b2f670b79f3add120dc058f25" gracePeriod=30 Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.828643 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c49d653-a114-4352-afd1-a2ca43c811f1-config-data\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.828701 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7c49d653-a114-4352-afd1-a2ca43c811f1-etc-swift\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.828751 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c49d653-a114-4352-afd1-a2ca43c811f1-public-tls-certs\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.828785 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c49d653-a114-4352-afd1-a2ca43c811f1-combined-ca-bundle\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.828806 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwg58\" (UniqueName: \"kubernetes.io/projected/7c49d653-a114-4352-afd1-a2ca43c811f1-kube-api-access-rwg58\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.828863 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c49d653-a114-4352-afd1-a2ca43c811f1-log-httpd\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.828903 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c49d653-a114-4352-afd1-a2ca43c811f1-internal-tls-certs\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.828921 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c49d653-a114-4352-afd1-a2ca43c811f1-run-httpd\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.833674 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c49d653-a114-4352-afd1-a2ca43c811f1-run-httpd\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " 
pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.836328 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c49d653-a114-4352-afd1-a2ca43c811f1-config-data\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.836415 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.836472 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.836487 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c49d653-a114-4352-afd1-a2ca43c811f1-log-httpd\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.838225 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c49d653-a114-4352-afd1-a2ca43c811f1-combined-ca-bundle\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.840215 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c49d653-a114-4352-afd1-a2ca43c811f1-internal-tls-certs\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.840800 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7c49d653-a114-4352-afd1-a2ca43c811f1-public-tls-certs\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.841153 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7c49d653-a114-4352-afd1-a2ca43c811f1-etc-swift\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.870882 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwg58\" (UniqueName: \"kubernetes.io/projected/7c49d653-a114-4352-afd1-a2ca43c811f1-kube-api-access-rwg58\") pod \"swift-proxy-78c9d88fc9-5rs9s\" (UID: \"7c49d653-a114-4352-afd1-a2ca43c811f1\") " pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.932385 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.942987 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.992633 4806 generic.go:334] "Generic (PLEG): container finished" podID="7bd66660-7983-40b2-94b0-bd9663391fee" containerID="04315d96d9ac1aee7f4e108564753a04b9c84266f4363555a34babee77a45edf" exitCode=137 Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.993482 4806 scope.go:117] "RemoveContainer" containerID="21a2259284eeb59ff2943baa8e7323075a4bee4436360e0885ed7593a195e81a" Jan 26 08:12:21 crc kubenswrapper[4806]: E0126 08:12:21.993699 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7dd479566-6k7mz_openstack(c8af52b9-239c-4e7e-9f4e-80aa1e4148a2)\"" pod="openstack/heat-api-7dd479566-6k7mz" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.993699 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7bd66660-7983-40b2-94b0-bd9663391fee","Type":"ContainerDied","Data":"04315d96d9ac1aee7f4e108564753a04b9c84266f4363555a34babee77a45edf"} Jan 26 08:12:21 crc kubenswrapper[4806]: I0126 08:12:21.994512 4806 scope.go:117] "RemoveContainer" containerID="faf28906d5ab543cefcb7319c9944f166e49216850d5dc9d199bb6814ca49d79" Jan 26 08:12:21 crc kubenswrapper[4806]: E0126 08:12:21.994772 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5c84c55f78-ls58x_openstack(8460c49b-4775-409d-b4c0-177929af70a4)\"" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" podUID="8460c49b-4775-409d-b4c0-177929af70a4" Jan 26 08:12:23 crc kubenswrapper[4806]: I0126 08:12:23.009212 4806 generic.go:334] "Generic (PLEG): container finished" podID="c547f19a-cde0-4c88-aa6a-d7b43f868565" containerID="dcdc9e36654ccc8cff04cd981b11301de74f383b2f670b79f3add120dc058f25" exitCode=0 Jan 26 08:12:23 crc kubenswrapper[4806]: I0126 08:12:23.009285 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d5647fbb4-vvzj4" event={"ID":"c547f19a-cde0-4c88-aa6a-d7b43f868565","Type":"ContainerDied","Data":"dcdc9e36654ccc8cff04cd981b11301de74f383b2f670b79f3add120dc058f25"} Jan 26 08:12:23 crc kubenswrapper[4806]: I0126 08:12:23.009886 4806 scope.go:117] "RemoveContainer" containerID="faf28906d5ab543cefcb7319c9944f166e49216850d5dc9d199bb6814ca49d79" Jan 26 08:12:23 crc kubenswrapper[4806]: I0126 08:12:23.009972 4806 scope.go:117] "RemoveContainer" containerID="21a2259284eeb59ff2943baa8e7323075a4bee4436360e0885ed7593a195e81a" Jan 26 08:12:23 crc kubenswrapper[4806]: E0126 08:12:23.010109 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-5c84c55f78-ls58x_openstack(8460c49b-4775-409d-b4c0-177929af70a4)\"" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" podUID="8460c49b-4775-409d-b4c0-177929af70a4" Jan 26 08:12:23 crc kubenswrapper[4806]: E0126 08:12:23.010181 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-7dd479566-6k7mz_openstack(c8af52b9-239c-4e7e-9f4e-80aa1e4148a2)\"" pod="openstack/heat-api-7dd479566-6k7mz" 
podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" Jan 26 08:12:24 crc kubenswrapper[4806]: I0126 08:12:24.026317 4806 generic.go:334] "Generic (PLEG): container finished" podID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerID="f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d" exitCode=137 Jan 26 08:12:24 crc kubenswrapper[4806]: I0126 08:12:24.026700 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d485d788d-5q4tb" event={"ID":"d7b4ee8d-6333-4683-94c4-b79229c76537","Type":"ContainerDied","Data":"f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d"} Jan 26 08:12:24 crc kubenswrapper[4806]: E0126 08:12:24.332810 4806 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/aff27cc8996c18161399b46f44a4ff9220558e0638aa7f08264cc656ac2c52eb/diff" to get inode usage: stat /var/lib/containers/storage/overlay/aff27cc8996c18161399b46f44a4ff9220558e0638aa7f08264cc656ac2c52eb/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_93bf46a8-2942-4b36-9853-88ff5c6e756b/ceilometer-notification-agent/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_93bf46a8-2942-4b36-9853-88ff5c6e756b/ceilometer-notification-agent/0.log: no such file or directory Jan 26 08:12:26 crc kubenswrapper[4806]: I0126 08:12:26.119802 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:26 crc kubenswrapper[4806]: I0126 08:12:26.120142 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="ceilometer-central-agent" containerID="cri-o://d821d0cd3a0369da46a6a3b07f9463e2b21db6542f0834eaf3b0becc69f51cfd" gracePeriod=30 Jan 26 08:12:26 crc kubenswrapper[4806]: I0126 08:12:26.120222 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="sg-core" containerID="cri-o://419d2b00eb63da05da2dd7fa5da1a852c0fcdf75897811126d6b6766bf1d5cbd" gracePeriod=30 Jan 26 08:12:26 crc kubenswrapper[4806]: I0126 08:12:26.120283 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="ceilometer-notification-agent" containerID="cri-o://f7e87a97636c9aa22a55bafb57e12cbbc0fbf00e97b6e15c251eeed86aab3612" gracePeriod=30 Jan 26 08:12:26 crc kubenswrapper[4806]: I0126 08:12:26.120251 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="proxy-httpd" containerID="cri-o://cb5eb2546a9b03249bcebd5ce1a4cd0ef38b92385cd7033cf6f53b557f91d5f0" gracePeriod=30 Jan 26 08:12:26 crc kubenswrapper[4806]: I0126 08:12:26.227049 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-957b6fbf8-7f82k" podUID="ce987c26-24cc-40b4-9898-9f00d4eda52e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.178:8004/healthcheck\": read tcp 10.217.0.2:45314->10.217.0.178:8004: read: connection reset by peer" Jan 26 08:12:26 crc kubenswrapper[4806]: I0126 08:12:26.597032 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" podUID="ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.169:5353: i/o timeout" Jan 26 08:12:26 crc kubenswrapper[4806]: I0126 08:12:26.604037 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="7bd66660-7983-40b2-94b0-bd9663391fee" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.170:8776/healthcheck\": dial tcp 10.217.0.170:8776: connect: connection refused" Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.055817 4806 generic.go:334] "Generic (PLEG): container finished" podID="ca560845-7250-4cf5-90d0-449180808340" containerID="cb5eb2546a9b03249bcebd5ce1a4cd0ef38b92385cd7033cf6f53b557f91d5f0" exitCode=0 Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.056145 4806 generic.go:334] "Generic (PLEG): container finished" podID="ca560845-7250-4cf5-90d0-449180808340" containerID="419d2b00eb63da05da2dd7fa5da1a852c0fcdf75897811126d6b6766bf1d5cbd" exitCode=2 Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.056162 4806 generic.go:334] "Generic (PLEG): container finished" podID="ca560845-7250-4cf5-90d0-449180808340" containerID="f7e87a97636c9aa22a55bafb57e12cbbc0fbf00e97b6e15c251eeed86aab3612" exitCode=0 Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.056173 4806 generic.go:334] "Generic (PLEG): container finished" podID="ca560845-7250-4cf5-90d0-449180808340" containerID="d821d0cd3a0369da46a6a3b07f9463e2b21db6542f0834eaf3b0becc69f51cfd" exitCode=0 Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.055883 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca560845-7250-4cf5-90d0-449180808340","Type":"ContainerDied","Data":"cb5eb2546a9b03249bcebd5ce1a4cd0ef38b92385cd7033cf6f53b557f91d5f0"} Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.056231 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca560845-7250-4cf5-90d0-449180808340","Type":"ContainerDied","Data":"419d2b00eb63da05da2dd7fa5da1a852c0fcdf75897811126d6b6766bf1d5cbd"} Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.056243 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca560845-7250-4cf5-90d0-449180808340","Type":"ContainerDied","Data":"f7e87a97636c9aa22a55bafb57e12cbbc0fbf00e97b6e15c251eeed86aab3612"} Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.056252 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca560845-7250-4cf5-90d0-449180808340","Type":"ContainerDied","Data":"d821d0cd3a0369da46a6a3b07f9463e2b21db6542f0834eaf3b0becc69f51cfd"} Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.059297 4806 generic.go:334] "Generic (PLEG): container finished" podID="ce987c26-24cc-40b4-9898-9f00d4eda52e" containerID="b705fd70926481698eeb503d4adfdb74f689027ab44984e054b013403599b08c" exitCode=0 Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.059384 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-957b6fbf8-7f82k" event={"ID":"ce987c26-24cc-40b4-9898-9f00d4eda52e","Type":"ContainerDied","Data":"b705fd70926481698eeb503d4adfdb74f689027ab44984e054b013403599b08c"} Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.063233 4806 generic.go:334] "Generic (PLEG): container finished" podID="c547f19a-cde0-4c88-aa6a-d7b43f868565" containerID="40cbaac9b2d13f60e8b402ba949a9eef260993775c0a8f132c5f61f476c58cec" exitCode=0 Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.063261 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-d5647fbb4-vvzj4" event={"ID":"c547f19a-cde0-4c88-aa6a-d7b43f868565","Type":"ContainerDied","Data":"40cbaac9b2d13f60e8b402ba949a9eef260993775c0a8f132c5f61f476c58cec"} Jan 26 08:12:27 crc kubenswrapper[4806]: I0126 08:12:27.260664 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" podUID="4d637212-f269-4915-b30b-4ffe4e19bb2d" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.176:8000/healthcheck\": read tcp 10.217.0.2:42144->10.217.0.176:8000: read: connection reset by peer" Jan 26 08:12:28 crc kubenswrapper[4806]: I0126 08:12:28.072204 4806 generic.go:334] "Generic (PLEG): container finished" podID="4d637212-f269-4915-b30b-4ffe4e19bb2d" containerID="30a00e07cdb1636dde2a329ce2788f25a3258a3af3728c5b7cebffe910c6a196" exitCode=0 Jan 26 08:12:28 crc kubenswrapper[4806]: I0126 08:12:28.072253 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" event={"ID":"4d637212-f269-4915-b30b-4ffe4e19bb2d","Type":"ContainerDied","Data":"30a00e07cdb1636dde2a329ce2788f25a3258a3af3728c5b7cebffe910c6a196"} Jan 26 08:12:28 crc kubenswrapper[4806]: I0126 08:12:28.782421 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:12:28 crc kubenswrapper[4806]: I0126 08:12:28.900000 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w76l\" (UniqueName: \"kubernetes.io/projected/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-kube-api-access-7w76l\") pod \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " Jan 26 08:12:28 crc kubenswrapper[4806]: I0126 08:12:28.900130 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-dns-swift-storage-0\") pod \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " Jan 26 08:12:28 crc kubenswrapper[4806]: I0126 08:12:28.900265 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-sb\") pod \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " Jan 26 08:12:28 crc kubenswrapper[4806]: I0126 08:12:28.900354 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-config\") pod \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " Jan 26 08:12:28 crc kubenswrapper[4806]: I0126 08:12:28.900413 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-dns-svc\") pod \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " Jan 26 08:12:28 crc kubenswrapper[4806]: I0126 08:12:28.900441 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-nb\") pod \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " Jan 26 08:12:28 crc kubenswrapper[4806]: I0126 08:12:28.943235 4806 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-kube-api-access-7w76l" (OuterVolumeSpecName: "kube-api-access-7w76l") pod "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" (UID: "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451"). InnerVolumeSpecName "kube-api-access-7w76l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:28 crc kubenswrapper[4806]: I0126 08:12:28.960641 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.003405 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w76l\" (UniqueName: \"kubernetes.io/projected/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-kube-api-access-7w76l\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.071589 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" (UID: "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.086374 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.103703 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" (UID: "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.104057 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" (UID: "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.104714 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-config-data-custom\") pod \"4d637212-f269-4915-b30b-4ffe4e19bb2d\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.105245 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-sb\") pod \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\" (UID: \"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451\") " Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.105303 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-combined-ca-bundle\") pod \"4d637212-f269-4915-b30b-4ffe4e19bb2d\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.105375 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjgbq\" (UniqueName: \"kubernetes.io/projected/4d637212-f269-4915-b30b-4ffe4e19bb2d-kube-api-access-vjgbq\") pod \"4d637212-f269-4915-b30b-4ffe4e19bb2d\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.105404 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-config-data\") pod \"4d637212-f269-4915-b30b-4ffe4e19bb2d\" (UID: \"4d637212-f269-4915-b30b-4ffe4e19bb2d\") " Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.105999 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.106016 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: W0126 08:12:29.106943 4806 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451/volumes/kubernetes.io~configmap/ovsdbserver-sb Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.106961 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" (UID: "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.110938 4806 scope.go:117] "RemoveContainer" containerID="40a0c14dcf443b8d17354f2b243bd8fcd17511ad07bcfa7eae5518853fc6283e" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.125231 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4d637212-f269-4915-b30b-4ffe4e19bb2d" (UID: "4d637212-f269-4915-b30b-4ffe4e19bb2d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.128300 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" (UID: "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.150913 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d637212-f269-4915-b30b-4ffe4e19bb2d-kube-api-access-vjgbq" (OuterVolumeSpecName: "kube-api-access-vjgbq") pod "4d637212-f269-4915-b30b-4ffe4e19bb2d" (UID: "4d637212-f269-4915-b30b-4ffe4e19bb2d"). InnerVolumeSpecName "kube-api-access-vjgbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.155409 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" event={"ID":"ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451","Type":"ContainerDied","Data":"204a06576a96662de2c1771ff7a815c4d192f978e42ca4d0cc4b27a0140e95dd"} Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.155574 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.156627 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-config" (OuterVolumeSpecName: "config") pod "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" (UID: "ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.158837 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" event={"ID":"4d637212-f269-4915-b30b-4ffe4e19bb2d","Type":"ContainerDied","Data":"296de1632d3165169c06a08c93427a99e4bb1f1087217f43e5e9f692f03646dd"} Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.158919 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-7448bc75bf-txwxj" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.208723 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.209065 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjgbq\" (UniqueName: \"kubernetes.io/projected/4d637212-f269-4915-b30b-4ffe4e19bb2d-kube-api-access-vjgbq\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.209082 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.209093 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.209106 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.224769 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d637212-f269-4915-b30b-4ffe4e19bb2d" (UID: "4d637212-f269-4915-b30b-4ffe4e19bb2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.247157 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-config-data" (OuterVolumeSpecName: "config-data") pod "4d637212-f269-4915-b30b-4ffe4e19bb2d" (UID: "4d637212-f269-4915-b30b-4ffe4e19bb2d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.257751 4806 scope.go:117] "RemoveContainer" containerID="49c0d8e9f9683e509328152203cc80f37bdfdd244576c95436f96d95be8dfbc1" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.319259 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.319288 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d637212-f269-4915-b30b-4ffe4e19bb2d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.380944 4806 scope.go:117] "RemoveContainer" containerID="52d032c86aa3acb5b6cb177137e22eaae32a133c95397506eff9fcd681c18b90" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.381360 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.522365 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkzzt\" (UniqueName: \"kubernetes.io/projected/7bd66660-7983-40b2-94b0-bd9663391fee-kube-api-access-pkzzt\") pod \"7bd66660-7983-40b2-94b0-bd9663391fee\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.522748 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-config-data\") pod \"7bd66660-7983-40b2-94b0-bd9663391fee\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.522791 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-combined-ca-bundle\") pod \"7bd66660-7983-40b2-94b0-bd9663391fee\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.522811 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-scripts\") pod \"7bd66660-7983-40b2-94b0-bd9663391fee\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.522838 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd66660-7983-40b2-94b0-bd9663391fee-logs\") pod \"7bd66660-7983-40b2-94b0-bd9663391fee\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.522931 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7bd66660-7983-40b2-94b0-bd9663391fee-etc-machine-id\") pod \"7bd66660-7983-40b2-94b0-bd9663391fee\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.522961 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-config-data-custom\") pod \"7bd66660-7983-40b2-94b0-bd9663391fee\" (UID: \"7bd66660-7983-40b2-94b0-bd9663391fee\") " Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.530188 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd66660-7983-40b2-94b0-bd9663391fee-logs" (OuterVolumeSpecName: "logs") pod "7bd66660-7983-40b2-94b0-bd9663391fee" (UID: "7bd66660-7983-40b2-94b0-bd9663391fee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.530678 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bd66660-7983-40b2-94b0-bd9663391fee-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7bd66660-7983-40b2-94b0-bd9663391fee" (UID: "7bd66660-7983-40b2-94b0-bd9663391fee"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.573267 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-scripts" (OuterVolumeSpecName: "scripts") pod "7bd66660-7983-40b2-94b0-bd9663391fee" (UID: "7bd66660-7983-40b2-94b0-bd9663391fee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.574042 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7bd66660-7983-40b2-94b0-bd9663391fee" (UID: "7bd66660-7983-40b2-94b0-bd9663391fee"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.626380 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.626415 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bd66660-7983-40b2-94b0-bd9663391fee-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.626424 4806 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7bd66660-7983-40b2-94b0-bd9663391fee-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.626433 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.627305 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd66660-7983-40b2-94b0-bd9663391fee-kube-api-access-pkzzt" (OuterVolumeSpecName: "kube-api-access-pkzzt") pod "7bd66660-7983-40b2-94b0-bd9663391fee" (UID: "7bd66660-7983-40b2-94b0-bd9663391fee"). InnerVolumeSpecName "kube-api-access-pkzzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.727806 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkzzt\" (UniqueName: \"kubernetes.io/projected/7bd66660-7983-40b2-94b0-bd9663391fee-kube-api-access-pkzzt\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.765750 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bd66660-7983-40b2-94b0-bd9663391fee" (UID: "7bd66660-7983-40b2-94b0-bd9663391fee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.800594 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-config-data" (OuterVolumeSpecName: "config-data") pod "7bd66660-7983-40b2-94b0-bd9663391fee" (UID: "7bd66660-7983-40b2-94b0-bd9663391fee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.828911 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.828936 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bd66660-7983-40b2-94b0-bd9663391fee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.914098 4806 scope.go:117] "RemoveContainer" containerID="30a00e07cdb1636dde2a329ce2788f25a3258a3af3728c5b7cebffe910c6a196" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.967333 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.979566 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-6g69f"] Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.984592 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.984793 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:12:29 crc kubenswrapper[4806]: I0126 08:12:29.990867 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-6g69f"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.013439 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-7448bc75bf-txwxj"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034239 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4chc\" (UniqueName: \"kubernetes.io/projected/c547f19a-cde0-4c88-aa6a-d7b43f868565-kube-api-access-d4chc\") pod \"c547f19a-cde0-4c88-aa6a-d7b43f868565\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034312 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca560845-7250-4cf5-90d0-449180808340-run-httpd\") pod \"ca560845-7250-4cf5-90d0-449180808340\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034335 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-httpd-config\") pod \"c547f19a-cde0-4c88-aa6a-d7b43f868565\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034366 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-combined-ca-bundle\") pod \"c547f19a-cde0-4c88-aa6a-d7b43f868565\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034393 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlzkz\" (UniqueName: \"kubernetes.io/projected/ce987c26-24cc-40b4-9898-9f00d4eda52e-kube-api-access-hlzkz\") pod \"ce987c26-24cc-40b4-9898-9f00d4eda52e\" (UID: 
\"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034419 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca560845-7250-4cf5-90d0-449180808340-log-httpd\") pod \"ca560845-7250-4cf5-90d0-449180808340\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034479 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-ovndb-tls-certs\") pod \"c547f19a-cde0-4c88-aa6a-d7b43f868565\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034501 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk5f9\" (UniqueName: \"kubernetes.io/projected/ca560845-7250-4cf5-90d0-449180808340-kube-api-access-nk5f9\") pod \"ca560845-7250-4cf5-90d0-449180808340\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034567 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-combined-ca-bundle\") pod \"ce987c26-24cc-40b4-9898-9f00d4eda52e\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034589 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-combined-ca-bundle\") pod \"ca560845-7250-4cf5-90d0-449180808340\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034615 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-config-data-custom\") pod \"ce987c26-24cc-40b4-9898-9f00d4eda52e\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034652 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-config\") pod \"c547f19a-cde0-4c88-aa6a-d7b43f868565\" (UID: \"c547f19a-cde0-4c88-aa6a-d7b43f868565\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034684 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-sg-core-conf-yaml\") pod \"ca560845-7250-4cf5-90d0-449180808340\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034706 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-config-data\") pod \"ca560845-7250-4cf5-90d0-449180808340\" (UID: \"ca560845-7250-4cf5-90d0-449180808340\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034737 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-scripts\") pod \"ca560845-7250-4cf5-90d0-449180808340\" (UID: 
\"ca560845-7250-4cf5-90d0-449180808340\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.034760 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-config-data\") pod \"ce987c26-24cc-40b4-9898-9f00d4eda52e\" (UID: \"ce987c26-24cc-40b4-9898-9f00d4eda52e\") " Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.048808 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca560845-7250-4cf5-90d0-449180808340-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ca560845-7250-4cf5-90d0-449180808340" (UID: "ca560845-7250-4cf5-90d0-449180808340"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.068297 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce987c26-24cc-40b4-9898-9f00d4eda52e-kube-api-access-hlzkz" (OuterVolumeSpecName: "kube-api-access-hlzkz") pod "ce987c26-24cc-40b4-9898-9f00d4eda52e" (UID: "ce987c26-24cc-40b4-9898-9f00d4eda52e"). InnerVolumeSpecName "kube-api-access-hlzkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.069261 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-7448bc75bf-txwxj"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.077637 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca560845-7250-4cf5-90d0-449180808340-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ca560845-7250-4cf5-90d0-449180808340" (UID: "ca560845-7250-4cf5-90d0-449180808340"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.084967 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-scripts" (OuterVolumeSpecName: "scripts") pod "ca560845-7250-4cf5-90d0-449180808340" (UID: "ca560845-7250-4cf5-90d0-449180808340"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.091310 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c547f19a-cde0-4c88-aa6a-d7b43f868565-kube-api-access-d4chc" (OuterVolumeSpecName: "kube-api-access-d4chc") pod "c547f19a-cde0-4c88-aa6a-d7b43f868565" (UID: "c547f19a-cde0-4c88-aa6a-d7b43f868565"). InnerVolumeSpecName "kube-api-access-d4chc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.092771 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca560845-7250-4cf5-90d0-449180808340-kube-api-access-nk5f9" (OuterVolumeSpecName: "kube-api-access-nk5f9") pod "ca560845-7250-4cf5-90d0-449180808340" (UID: "ca560845-7250-4cf5-90d0-449180808340"). InnerVolumeSpecName "kube-api-access-nk5f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.117696 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c547f19a-cde0-4c88-aa6a-d7b43f868565" (UID: "c547f19a-cde0-4c88-aa6a-d7b43f868565"). 
InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.137430 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.137655 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4chc\" (UniqueName: \"kubernetes.io/projected/c547f19a-cde0-4c88-aa6a-d7b43f868565-kube-api-access-d4chc\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.137734 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca560845-7250-4cf5-90d0-449180808340-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.137796 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.137871 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlzkz\" (UniqueName: \"kubernetes.io/projected/ce987c26-24cc-40b4-9898-9f00d4eda52e-kube-api-access-hlzkz\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.137937 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ca560845-7250-4cf5-90d0-449180808340-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.137993 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk5f9\" (UniqueName: \"kubernetes.io/projected/ca560845-7250-4cf5-90d0-449180808340-kube-api-access-nk5f9\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.161955 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ce987c26-24cc-40b4-9898-9f00d4eda52e" (UID: "ce987c26-24cc-40b4-9898-9f00d4eda52e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.175596 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce987c26-24cc-40b4-9898-9f00d4eda52e" (UID: "ce987c26-24cc-40b4-9898-9f00d4eda52e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.198582 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d5647fbb4-vvzj4" event={"ID":"c547f19a-cde0-4c88-aa6a-d7b43f868565","Type":"ContainerDied","Data":"5cb2c60e993ae596622a0d8a6da7ca562678b11bffdd59299536e30cb6e32b2f"} Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.198624 4806 scope.go:117] "RemoveContainer" containerID="dcdc9e36654ccc8cff04cd981b11301de74f383b2f670b79f3add120dc058f25" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.198713 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d5647fbb4-vvzj4" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.205283 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"7bd66660-7983-40b2-94b0-bd9663391fee","Type":"ContainerDied","Data":"ad6a6d51316413a785de65ac54e7490711c5222011bade4bfde24e8cebdaa5e9"} Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.205409 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.208836 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ca560845-7250-4cf5-90d0-449180808340" (UID: "ca560845-7250-4cf5-90d0-449180808340"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.209908 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5687f48547-kz5md"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.224806 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-957b6fbf8-7f82k" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.224954 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-957b6fbf8-7f82k" event={"ID":"ce987c26-24cc-40b4-9898-9f00d4eda52e","Type":"ContainerDied","Data":"6dbdaba3d63981be136fea51c50664ace2b585d642d1f7e34cbfe737cf923d67"} Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.230068 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f39640fb-b2ef-4514-84d0-38c6d07adb11","Type":"ContainerStarted","Data":"4b0e35eee8922b7c40917268951ab006621944bf42197457c98c1af16f1aa522"} Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.231276 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-56c9b6cf4b-dl98j"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.236477 4806 scope.go:117] "RemoveContainer" containerID="40cbaac9b2d13f60e8b402ba949a9eef260993775c0a8f132c5f61f476c58cec" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.236852 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d485d788d-5q4tb" event={"ID":"d7b4ee8d-6333-4683-94c4-b79229c76537","Type":"ContainerStarted","Data":"7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143"} Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.237010 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d485d788d-5q4tb" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon-log" containerID="cri-o://2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc" gracePeriod=30 Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.237070 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7d485d788d-5q4tb" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon" containerID="cri-o://7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143" gracePeriod=30 Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.241143 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.241168 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.241177 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.267269 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ca560845-7250-4cf5-90d0-449180808340","Type":"ContainerDied","Data":"96ef9e0af953e12a12be73791d628abfbedd4646af69ed385dc5cfa75c7cfdaf"} Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.267365 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.272334 4806 scope.go:117] "RemoveContainer" containerID="04315d96d9ac1aee7f4e108564753a04b9c84266f4363555a34babee77a45edf" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.276318 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-78c9d88fc9-5rs9s"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.285811 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.324948468 podStartE2EDuration="24.285793829s" podCreationTimestamp="2026-01-26 08:12:06 +0000 UTC" firstStartedPulling="2026-01-26 08:12:07.056833694 +0000 UTC m=+1106.321241750" lastFinishedPulling="2026-01-26 08:12:29.017679055 +0000 UTC m=+1128.282087111" observedRunningTime="2026-01-26 08:12:30.258054753 +0000 UTC m=+1129.522462819" watchObservedRunningTime="2026-01-26 08:12:30.285793829 +0000 UTC m=+1129.550201895" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.307570 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-config-data" (OuterVolumeSpecName: "config-data") pod "ce987c26-24cc-40b4-9898-9f00d4eda52e" (UID: "ce987c26-24cc-40b4-9898-9f00d4eda52e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.313504 4806 scope.go:117] "RemoveContainer" containerID="3c9cd6990c3993b25d4c9718941003d4655ec76a4e802f6b2857ff109f92bfb9" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.346416 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce987c26-24cc-40b4-9898-9f00d4eda52e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.346643 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c547f19a-cde0-4c88-aa6a-d7b43f868565" (UID: "c547f19a-cde0-4c88-aa6a-d7b43f868565"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.354313 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.357794 4806 scope.go:117] "RemoveContainer" containerID="b705fd70926481698eeb503d4adfdb74f689027ab44984e054b013403599b08c" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.390749 4806 scope.go:117] "RemoveContainer" containerID="cb5eb2546a9b03249bcebd5ce1a4cd0ef38b92385cd7033cf6f53b557f91d5f0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.397315 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.416424 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca560845-7250-4cf5-90d0-449180808340" (UID: "ca560845-7250-4cf5-90d0-449180808340"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.422953 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 08:12:30 crc kubenswrapper[4806]: E0126 08:12:30.423376 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd66660-7983-40b2-94b0-bd9663391fee" containerName="cinder-api" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423393 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd66660-7983-40b2-94b0-bd9663391fee" containerName="cinder-api" Jan 26 08:12:30 crc kubenswrapper[4806]: E0126 08:12:30.423407 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="ceilometer-central-agent" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423414 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="ceilometer-central-agent" Jan 26 08:12:30 crc kubenswrapper[4806]: E0126 08:12:30.423421 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d637212-f269-4915-b30b-4ffe4e19bb2d" containerName="heat-cfnapi" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423427 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d637212-f269-4915-b30b-4ffe4e19bb2d" containerName="heat-cfnapi" Jan 26 08:12:30 crc kubenswrapper[4806]: E0126 08:12:30.423439 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="ceilometer-notification-agent" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423446 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="ceilometer-notification-agent" Jan 26 08:12:30 crc kubenswrapper[4806]: E0126 08:12:30.423456 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="sg-core" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423461 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="sg-core" Jan 26 08:12:30 crc kubenswrapper[4806]: E0126 08:12:30.423475 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c547f19a-cde0-4c88-aa6a-d7b43f868565" containerName="neutron-api" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 
08:12:30.423480 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c547f19a-cde0-4c88-aa6a-d7b43f868565" containerName="neutron-api" Jan 26 08:12:30 crc kubenswrapper[4806]: E0126 08:12:30.423489 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" containerName="init" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423498 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" containerName="init" Jan 26 08:12:30 crc kubenswrapper[4806]: E0126 08:12:30.423514 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c547f19a-cde0-4c88-aa6a-d7b43f868565" containerName="neutron-httpd" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423540 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c547f19a-cde0-4c88-aa6a-d7b43f868565" containerName="neutron-httpd" Jan 26 08:12:30 crc kubenswrapper[4806]: E0126 08:12:30.423550 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="proxy-httpd" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423556 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="proxy-httpd" Jan 26 08:12:30 crc kubenswrapper[4806]: E0126 08:12:30.423573 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd66660-7983-40b2-94b0-bd9663391fee" containerName="cinder-api-log" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423579 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd66660-7983-40b2-94b0-bd9663391fee" containerName="cinder-api-log" Jan 26 08:12:30 crc kubenswrapper[4806]: E0126 08:12:30.423590 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce987c26-24cc-40b4-9898-9f00d4eda52e" containerName="heat-api" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423596 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce987c26-24cc-40b4-9898-9f00d4eda52e" containerName="heat-api" Jan 26 08:12:30 crc kubenswrapper[4806]: E0126 08:12:30.423602 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" containerName="dnsmasq-dns" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423608 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" containerName="dnsmasq-dns" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423812 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="sg-core" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423822 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c547f19a-cde0-4c88-aa6a-d7b43f868565" containerName="neutron-httpd" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423833 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="ceilometer-central-agent" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423842 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d637212-f269-4915-b30b-4ffe4e19bb2d" containerName="heat-cfnapi" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423851 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c547f19a-cde0-4c88-aa6a-d7b43f868565" containerName="neutron-api" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423859 4806 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="proxy-httpd" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423867 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd66660-7983-40b2-94b0-bd9663391fee" containerName="cinder-api" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423877 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca560845-7250-4cf5-90d0-449180808340" containerName="ceilometer-notification-agent" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423887 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce987c26-24cc-40b4-9898-9f00d4eda52e" containerName="heat-api" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423899 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" containerName="dnsmasq-dns" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.423907 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd66660-7983-40b2-94b0-bd9663391fee" containerName="cinder-api-log" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.424948 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.429285 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.429706 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.430472 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.434539 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-config-data" (OuterVolumeSpecName: "config-data") pod "ca560845-7250-4cf5-90d0-449180808340" (UID: "ca560845-7250-4cf5-90d0-449180808340"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.445720 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-config" (OuterVolumeSpecName: "config") pod "c547f19a-cde0-4c88-aa6a-d7b43f868565" (UID: "c547f19a-cde0-4c88-aa6a-d7b43f868565"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.447852 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.447878 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.447888 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.447897 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca560845-7250-4cf5-90d0-449180808340-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.449283 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.453970 4806 scope.go:117] "RemoveContainer" containerID="419d2b00eb63da05da2dd7fa5da1a852c0fcdf75897811126d6b6766bf1d5cbd" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.456987 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c547f19a-cde0-4c88-aa6a-d7b43f868565" (UID: "c547f19a-cde0-4c88-aa6a-d7b43f868565"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.578340 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.582958 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.584203 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-config-data\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.584344 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.584485 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-logs\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.584800 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-config-data-custom\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.584932 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-scripts\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.585340 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.585399 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv7vf\" (UniqueName: \"kubernetes.io/projected/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-kube-api-access-qv7vf\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.585603 4806 reconciler_common.go:293] "Volume detached for volume 
\"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c547f19a-cde0-4c88-aa6a-d7b43f868565-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.692761 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.694482 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.694511 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-config-data\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.694624 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.694644 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-logs\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.694706 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-config-data-custom\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.694733 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-scripts\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.694800 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.694821 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv7vf\" (UniqueName: \"kubernetes.io/projected/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-kube-api-access-qv7vf\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.695718 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-logs\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.697645 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.704025 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.704538 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4e0c22de-1431-4fcc-9ebd-1cc4791260c8" containerName="glance-log" containerID="cri-o://2ad53b837c7da7b8af599460f6cfc99ca4cac64f501b36d44e0ac6f5929e7096" gracePeriod=30 Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.704987 4806 scope.go:117] "RemoveContainer" containerID="f7e87a97636c9aa22a55bafb57e12cbbc0fbf00e97b6e15c251eeed86aab3612" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.705216 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4e0c22de-1431-4fcc-9ebd-1cc4791260c8" containerName="glance-httpd" containerID="cri-o://9c0390bc804d1036e606767eb250807db0d925677c52f562f9fee863672b8282" gracePeriod=30 Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.705551 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.719799 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.722968 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-scripts\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.725488 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-config-data-custom\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.730371 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-config-data\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.749125 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-public-tls-certs\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.749267 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv7vf\" (UniqueName: \"kubernetes.io/projected/e18f5e2b-f6be-4016-ad01-21b9e9b8bc58-kube-api-access-qv7vf\") pod \"cinder-api-0\" (UID: \"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58\") " pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.806043 4806 scope.go:117] "RemoveContainer" containerID="d821d0cd3a0369da46a6a3b07f9463e2b21db6542f0834eaf3b0becc69f51cfd" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.837594 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.860748 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.878439 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.891243 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.900284 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.905843 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.906118 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.908743 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-957b6fbf8-7f82k"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.927591 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-957b6fbf8-7f82k"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.939577 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.952198 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d5647fbb4-vvzj4"] Jan 26 08:12:30 crc kubenswrapper[4806]: I0126 08:12:30.957428 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d5647fbb4-vvzj4"] Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.011495 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.011556 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xf72\" (UniqueName: \"kubernetes.io/projected/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-kube-api-access-8xf72\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.011577 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-run-httpd\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.011598 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-scripts\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.011654 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.011677 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-log-httpd\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.011700 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-config-data\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.077958 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d637212-f269-4915-b30b-4ffe4e19bb2d" path="/var/lib/kubelet/pods/4d637212-f269-4915-b30b-4ffe4e19bb2d/volumes" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.078703 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd66660-7983-40b2-94b0-bd9663391fee" path="/var/lib/kubelet/pods/7bd66660-7983-40b2-94b0-bd9663391fee/volumes" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.079600 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c547f19a-cde0-4c88-aa6a-d7b43f868565" path="/var/lib/kubelet/pods/c547f19a-cde0-4c88-aa6a-d7b43f868565/volumes" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.080705 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca560845-7250-4cf5-90d0-449180808340" path="/var/lib/kubelet/pods/ca560845-7250-4cf5-90d0-449180808340/volumes" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.081837 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce987c26-24cc-40b4-9898-9f00d4eda52e" path="/var/lib/kubelet/pods/ce987c26-24cc-40b4-9898-9f00d4eda52e/volumes" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.083738 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" path="/var/lib/kubelet/pods/ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451/volumes" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.115538 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xf72\" (UniqueName: \"kubernetes.io/projected/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-kube-api-access-8xf72\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 
08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.115585 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-run-httpd\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.115612 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-scripts\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.115674 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.115698 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-log-httpd\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.115716 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-config-data\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.115797 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.129458 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-log-httpd\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.129697 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-run-httpd\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.130208 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.137318 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.138222 4806 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-config-data\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.146374 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-scripts\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.176704 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xf72\" (UniqueName: \"kubernetes.io/projected/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-kube-api-access-8xf72\") pod \"ceilometer-0\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.227611 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.336274 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5687f48547-kz5md" event={"ID":"39ad9dca-7dee-4116-ab24-071e59b41dc2","Type":"ContainerStarted","Data":"942304bb686eb55525650d55eaf54147b1ebf33a9df83c71b9242b2343c1e5a6"} Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.336502 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5687f48547-kz5md" event={"ID":"39ad9dca-7dee-4116-ab24-071e59b41dc2","Type":"ContainerStarted","Data":"443952372de796e2d309c3ba27765c29aa25560010edae13196e8cd42bd52064"} Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.339099 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.342376 4806 generic.go:334] "Generic (PLEG): container finished" podID="4e0c22de-1431-4fcc-9ebd-1cc4791260c8" containerID="2ad53b837c7da7b8af599460f6cfc99ca4cac64f501b36d44e0ac6f5929e7096" exitCode=143 Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.342468 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e0c22de-1431-4fcc-9ebd-1cc4791260c8","Type":"ContainerDied","Data":"2ad53b837c7da7b8af599460f6cfc99ca4cac64f501b36d44e0ac6f5929e7096"} Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.347387 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78c9d88fc9-5rs9s" event={"ID":"7c49d653-a114-4352-afd1-a2ca43c811f1","Type":"ContainerStarted","Data":"6da4e7349870d3fe27ebbd7a35db928e8bb0723958394a890a00f6dd4e0d0f94"} Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.347419 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78c9d88fc9-5rs9s" event={"ID":"7c49d653-a114-4352-afd1-a2ca43c811f1","Type":"ContainerStarted","Data":"05dd521dfc95d614cf96865035193a996060429aff2154aa5d20637326685c65"} Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.354639 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" event={"ID":"e2b71668-05ca-4e62-a0fc-1e240e24caff","Type":"ContainerStarted","Data":"80ea160baa6d0176c1d5b7851d3775d11879e36cb7f79c97844d11bf568b8548"} Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.354702 4806 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" event={"ID":"e2b71668-05ca-4e62-a0fc-1e240e24caff","Type":"ContainerStarted","Data":"77ae1f3d1354ca81efbfa7e456dc997b0569bc5c621eedd55e50a1529f62883d"} Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.355234 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.388846 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5687f48547-kz5md" podStartSLOduration=11.388830734999999 podStartE2EDuration="11.388830735s" podCreationTimestamp="2026-01-26 08:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:31.378859887 +0000 UTC m=+1130.643267943" watchObservedRunningTime="2026-01-26 08:12:31.388830735 +0000 UTC m=+1130.653238791" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.415144 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" podStartSLOduration=11.415123421 podStartE2EDuration="11.415123421s" podCreationTimestamp="2026-01-26 08:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:31.403780113 +0000 UTC m=+1130.668188179" watchObservedRunningTime="2026-01-26 08:12:31.415123421 +0000 UTC m=+1130.679531477" Jan 26 08:12:31 crc kubenswrapper[4806]: W0126 08:12:31.482672 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode18f5e2b_f6be_4016_ad01_21b9e9b8bc58.slice/crio-b226e41c3efdf8cdec8adaef8660f5e2a9aee944a055bf3bb107f95cc46acaf8 WatchSource:0}: Error finding container b226e41c3efdf8cdec8adaef8660f5e2a9aee944a055bf3bb107f95cc46acaf8: Status 404 returned error can't find the container with id b226e41c3efdf8cdec8adaef8660f5e2a9aee944a055bf3bb107f95cc46acaf8 Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.482784 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.603489 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-6g69f" podUID="ec2813a6-2bcf-4dcd-a9ab-4cd2ed03d451" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.169:5353: i/o timeout" Jan 26 08:12:31 crc kubenswrapper[4806]: I0126 08:12:31.929486 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:31 crc kubenswrapper[4806]: W0126 08:12:31.982933 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59eb0fbe_8f63_41b2_9d40_0af8db4e8b4d.slice/crio-02fee1900d98ef883ffd9586f7951126ea9eddaffbbcab6677958d1eba638026 WatchSource:0}: Error finding container 02fee1900d98ef883ffd9586f7951126ea9eddaffbbcab6677958d1eba638026: Status 404 returned error can't find the container with id 02fee1900d98ef883ffd9586f7951126ea9eddaffbbcab6677958d1eba638026 Jan 26 08:12:32 crc kubenswrapper[4806]: I0126 08:12:32.365819 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d","Type":"ContainerStarted","Data":"02fee1900d98ef883ffd9586f7951126ea9eddaffbbcab6677958d1eba638026"} Jan 26 08:12:32 crc 
kubenswrapper[4806]: I0126 08:12:32.372598 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-78c9d88fc9-5rs9s" event={"ID":"7c49d653-a114-4352-afd1-a2ca43c811f1","Type":"ContainerStarted","Data":"5229a29445dff264450bd97ca25a268b10aaa5c18e33601b46b9ca1438a0984d"} Jan 26 08:12:32 crc kubenswrapper[4806]: I0126 08:12:32.372641 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:32 crc kubenswrapper[4806]: I0126 08:12:32.372663 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:32 crc kubenswrapper[4806]: I0126 08:12:32.375483 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58","Type":"ContainerStarted","Data":"b226e41c3efdf8cdec8adaef8660f5e2a9aee944a055bf3bb107f95cc46acaf8"} Jan 26 08:12:32 crc kubenswrapper[4806]: I0126 08:12:32.399421 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-78c9d88fc9-5rs9s" podStartSLOduration=11.399400757 podStartE2EDuration="11.399400757s" podCreationTimestamp="2026-01-26 08:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:32.398981595 +0000 UTC m=+1131.663389651" watchObservedRunningTime="2026-01-26 08:12:32.399400757 +0000 UTC m=+1131.663808813" Jan 26 08:12:32 crc kubenswrapper[4806]: I0126 08:12:32.847735 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:12:32 crc kubenswrapper[4806]: I0126 08:12:32.852023 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="20febbb2-a1ac-4a38-8a1d-594fa53b0b06" containerName="glance-log" containerID="cri-o://8744719c3edd57487457204fcef0336b3149b40334d7045bf497f74d4a82cd60" gracePeriod=30 Jan 26 08:12:32 crc kubenswrapper[4806]: I0126 08:12:32.852129 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="20febbb2-a1ac-4a38-8a1d-594fa53b0b06" containerName="glance-httpd" containerID="cri-o://cc152a9a8b4a3addd40f4a2697481ff2981fe983753c7a5d570dc0e5efb5eb2d" gracePeriod=30 Jan 26 08:12:33 crc kubenswrapper[4806]: I0126 08:12:33.386119 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58","Type":"ContainerStarted","Data":"f649c3a3b7b75639e604190d76b345daf80f3097e5714528896c2294aca62000"} Jan 26 08:12:33 crc kubenswrapper[4806]: I0126 08:12:33.389431 4806 generic.go:334] "Generic (PLEG): container finished" podID="20febbb2-a1ac-4a38-8a1d-594fa53b0b06" containerID="8744719c3edd57487457204fcef0336b3149b40334d7045bf497f74d4a82cd60" exitCode=143 Jan 26 08:12:33 crc kubenswrapper[4806]: I0126 08:12:33.389536 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20febbb2-a1ac-4a38-8a1d-594fa53b0b06","Type":"ContainerDied","Data":"8744719c3edd57487457204fcef0336b3149b40334d7045bf497f74d4a82cd60"} Jan 26 08:12:33 crc kubenswrapper[4806]: I0126 08:12:33.391693 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d","Type":"ContainerStarted","Data":"607ba68a56fad93daf7aed89180eebc182e9a4b8b1c9d74e9080dea6a7125203"} Jan 26 08:12:33 crc kubenswrapper[4806]: I0126 08:12:33.735646 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.127901 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.412047 4806 generic.go:334] "Generic (PLEG): container finished" podID="4e0c22de-1431-4fcc-9ebd-1cc4791260c8" containerID="9c0390bc804d1036e606767eb250807db0d925677c52f562f9fee863672b8282" exitCode=0 Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.412357 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e0c22de-1431-4fcc-9ebd-1cc4791260c8","Type":"ContainerDied","Data":"9c0390bc804d1036e606767eb250807db0d925677c52f562f9fee863672b8282"} Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.414083 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e18f5e2b-f6be-4016-ad01-21b9e9b8bc58","Type":"ContainerStarted","Data":"5dc30955f50356c5317b55b24f5e693fff82d67dfa1913d65bf507912c79a5b1"} Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.414613 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.419925 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d","Type":"ContainerStarted","Data":"4e6dc21932d5d19f7362d957729c15bbd2db014cfef64a8569c768604e06b64c"} Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.419973 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d","Type":"ContainerStarted","Data":"03f3f7622afb237e8be581542fbe6897e2adc9b1d714d3eb253e01ada8cd7b45"} Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.437419 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.437402432 podStartE2EDuration="4.437402432s" podCreationTimestamp="2026-01-26 08:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:34.433482302 +0000 UTC m=+1133.697890358" watchObservedRunningTime="2026-01-26 08:12:34.437402432 +0000 UTC m=+1133.701810488" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.540848 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.608260 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-logs\") pod \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.608322 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-scripts\") pod \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.608468 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-combined-ca-bundle\") pod \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.608494 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stk4d\" (UniqueName: \"kubernetes.io/projected/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-kube-api-access-stk4d\") pod \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.608542 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-public-tls-certs\") pod \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.608577 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-config-data\") pod \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.608635 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.608657 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-httpd-run\") pod \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\" (UID: \"4e0c22de-1431-4fcc-9ebd-1cc4791260c8\") " Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.611974 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-logs" (OuterVolumeSpecName: "logs") pod "4e0c22de-1431-4fcc-9ebd-1cc4791260c8" (UID: "4e0c22de-1431-4fcc-9ebd-1cc4791260c8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.614504 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4e0c22de-1431-4fcc-9ebd-1cc4791260c8" (UID: "4e0c22de-1431-4fcc-9ebd-1cc4791260c8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.617618 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-scripts" (OuterVolumeSpecName: "scripts") pod "4e0c22de-1431-4fcc-9ebd-1cc4791260c8" (UID: "4e0c22de-1431-4fcc-9ebd-1cc4791260c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.621675 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "4e0c22de-1431-4fcc-9ebd-1cc4791260c8" (UID: "4e0c22de-1431-4fcc-9ebd-1cc4791260c8"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.644671 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-kube-api-access-stk4d" (OuterVolumeSpecName: "kube-api-access-stk4d") pod "4e0c22de-1431-4fcc-9ebd-1cc4791260c8" (UID: "4e0c22de-1431-4fcc-9ebd-1cc4791260c8"). InnerVolumeSpecName "kube-api-access-stk4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.715307 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stk4d\" (UniqueName: \"kubernetes.io/projected/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-kube-api-access-stk4d\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.721281 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.721400 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.721477 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.721566 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.742637 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-config-data" (OuterVolumeSpecName: "config-data") pod "4e0c22de-1431-4fcc-9ebd-1cc4791260c8" (UID: "4e0c22de-1431-4fcc-9ebd-1cc4791260c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.752189 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e0c22de-1431-4fcc-9ebd-1cc4791260c8" (UID: "4e0c22de-1431-4fcc-9ebd-1cc4791260c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.770165 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4e0c22de-1431-4fcc-9ebd-1cc4791260c8" (UID: "4e0c22de-1431-4fcc-9ebd-1cc4791260c8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.780677 4806 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.826308 4806 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.826344 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.826355 4806 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:34 crc kubenswrapper[4806]: I0126 08:12:34.826363 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0c22de-1431-4fcc-9ebd-1cc4791260c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.429626 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d","Type":"ContainerStarted","Data":"4e6187333ba99e41a8188cef5199079bc5c96e5506b03b658f1a442087e812ba"} Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.429664 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="ceilometer-central-agent" containerID="cri-o://607ba68a56fad93daf7aed89180eebc182e9a4b8b1c9d74e9080dea6a7125203" gracePeriod=30 Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.429712 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.429725 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="sg-core" containerID="cri-o://4e6dc21932d5d19f7362d957729c15bbd2db014cfef64a8569c768604e06b64c" gracePeriod=30 Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.429737 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="ceilometer-notification-agent" containerID="cri-o://03f3f7622afb237e8be581542fbe6897e2adc9b1d714d3eb253e01ada8cd7b45" gracePeriod=30 Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.429765 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="proxy-httpd" containerID="cri-o://4e6187333ba99e41a8188cef5199079bc5c96e5506b03b658f1a442087e812ba" gracePeriod=30 Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.433731 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.434754 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4e0c22de-1431-4fcc-9ebd-1cc4791260c8","Type":"ContainerDied","Data":"9f6bf4e0c7d26f3986d737b202fc9bd31912452aa2d23cc49e7c9eede2751cd7"} Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.434804 4806 scope.go:117] "RemoveContainer" containerID="9c0390bc804d1036e606767eb250807db0d925677c52f562f9fee863672b8282" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.463489 4806 scope.go:117] "RemoveContainer" containerID="2ad53b837c7da7b8af599460f6cfc99ca4cac64f501b36d44e0ac6f5929e7096" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.468676 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.327791254 podStartE2EDuration="5.468658931s" podCreationTimestamp="2026-01-26 08:12:30 +0000 UTC" firstStartedPulling="2026-01-26 08:12:31.991340515 +0000 UTC m=+1131.255748571" lastFinishedPulling="2026-01-26 08:12:35.132208192 +0000 UTC m=+1134.396616248" observedRunningTime="2026-01-26 08:12:35.466257324 +0000 UTC m=+1134.730665370" watchObservedRunningTime="2026-01-26 08:12:35.468658931 +0000 UTC m=+1134.733066987" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.494300 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.504670 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.537091 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:12:35 crc kubenswrapper[4806]: E0126 08:12:35.537782 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0c22de-1431-4fcc-9ebd-1cc4791260c8" containerName="glance-log" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.537825 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0c22de-1431-4fcc-9ebd-1cc4791260c8" containerName="glance-log" Jan 26 08:12:35 crc kubenswrapper[4806]: E0126 08:12:35.537849 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e0c22de-1431-4fcc-9ebd-1cc4791260c8" containerName="glance-httpd" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.537855 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e0c22de-1431-4fcc-9ebd-1cc4791260c8" containerName="glance-httpd" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.538022 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0c22de-1431-4fcc-9ebd-1cc4791260c8" containerName="glance-log" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.538046 4806 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0c22de-1431-4fcc-9ebd-1cc4791260c8" containerName="glance-httpd" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.547764 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.556824 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.556864 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.570377 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.751968 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.752056 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mh2t\" (UniqueName: \"kubernetes.io/projected/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-kube-api-access-6mh2t\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.752079 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-scripts\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.752097 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-config-data\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.752134 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.752159 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-logs\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.752196 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.752216 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.853337 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mh2t\" (UniqueName: \"kubernetes.io/projected/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-kube-api-access-6mh2t\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.853384 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-scripts\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.853402 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-config-data\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.853439 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.853465 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-logs\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.853506 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.853541 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.853596 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " 
pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.854362 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-logs\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.854554 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.857693 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.859716 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.859982 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-config-data\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.868179 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.868907 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-scripts\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.881819 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mh2t\" (UniqueName: \"kubernetes.io/projected/f0a1a709-885d-4f4e-a2a2-51d7bad26f6f-kube-api-access-6mh2t\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.890307 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f\") " pod="openstack/glance-default-external-api-0" Jan 26 08:12:35 crc kubenswrapper[4806]: I0126 08:12:35.923891 4806 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.043037 4806 scope.go:117] "RemoveContainer" containerID="21a2259284eeb59ff2943baa8e7323075a4bee4436360e0885ed7593a195e81a" Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.445009 4806 generic.go:334] "Generic (PLEG): container finished" podID="20febbb2-a1ac-4a38-8a1d-594fa53b0b06" containerID="cc152a9a8b4a3addd40f4a2697481ff2981fe983753c7a5d570dc0e5efb5eb2d" exitCode=0 Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.445110 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20febbb2-a1ac-4a38-8a1d-594fa53b0b06","Type":"ContainerDied","Data":"cc152a9a8b4a3addd40f4a2697481ff2981fe983753c7a5d570dc0e5efb5eb2d"} Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.448356 4806 generic.go:334] "Generic (PLEG): container finished" podID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerID="4e6187333ba99e41a8188cef5199079bc5c96e5506b03b658f1a442087e812ba" exitCode=0 Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.448376 4806 generic.go:334] "Generic (PLEG): container finished" podID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerID="4e6dc21932d5d19f7362d957729c15bbd2db014cfef64a8569c768604e06b64c" exitCode=2 Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.448383 4806 generic.go:334] "Generic (PLEG): container finished" podID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerID="03f3f7622afb237e8be581542fbe6897e2adc9b1d714d3eb253e01ada8cd7b45" exitCode=0 Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.448401 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d","Type":"ContainerDied","Data":"4e6187333ba99e41a8188cef5199079bc5c96e5506b03b658f1a442087e812ba"} Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.448425 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d","Type":"ContainerDied","Data":"4e6dc21932d5d19f7362d957729c15bbd2db014cfef64a8569c768604e06b64c"} Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.448435 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d","Type":"ContainerDied","Data":"03f3f7622afb237e8be581542fbe6897e2adc9b1d714d3eb253e01ada8cd7b45"} Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.631062 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.814779 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5d649c5968-gb8r4" Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.877265 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5c479d9749-55sxk"] Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.877446 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5c479d9749-55sxk" podUID="ceffb75b-59c2-41e0-96e9-4ccbb69ee956" containerName="heat-engine" containerID="cri-o://5e435ae63256b96544903f4c8e60a13b3975f9f77ccedf6957afb8c1d8a46baf" gracePeriod=60 Jan 26 08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.969782 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 
08:12:36 crc kubenswrapper[4806]: I0126 08:12:36.982901 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-78c9d88fc9-5rs9s" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.080061 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0c22de-1431-4fcc-9ebd-1cc4791260c8" path="/var/lib/kubelet/pods/4e0c22de-1431-4fcc-9ebd-1cc4791260c8/volumes" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.446553 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.477511 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.478210 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20febbb2-a1ac-4a38-8a1d-594fa53b0b06","Type":"ContainerDied","Data":"14fccf9ba4ff31f8b5e7130b007494f392074236eab4f20ec1115dce87888c70"} Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.478241 4806 scope.go:117] "RemoveContainer" containerID="cc152a9a8b4a3addd40f4a2697481ff2981fe983753c7a5d570dc0e5efb5eb2d" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.494218 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f","Type":"ContainerStarted","Data":"3d2c770701498f3cd049d462251c2322cc75b1da19f3d49616565f3a8f809bff"} Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.516372 4806 generic.go:334] "Generic (PLEG): container finished" podID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" containerID="1926db2c196dcd712953e733570421ccb04cd0962bd97581f32df0b08246e3f9" exitCode=1 Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.517256 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dd479566-6k7mz" event={"ID":"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2","Type":"ContainerDied","Data":"1926db2c196dcd712953e733570421ccb04cd0962bd97581f32df0b08246e3f9"} Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.517541 4806 scope.go:117] "RemoveContainer" containerID="1926db2c196dcd712953e733570421ccb04cd0962bd97581f32df0b08246e3f9" Jan 26 08:12:37 crc kubenswrapper[4806]: E0126 08:12:37.517728 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 20s restarting failed container=heat-api pod=heat-api-7dd479566-6k7mz_openstack(c8af52b9-239c-4e7e-9f4e-80aa1e4148a2)\"" pod="openstack/heat-api-7dd479566-6k7mz" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.533207 4806 scope.go:117] "RemoveContainer" containerID="8744719c3edd57487457204fcef0336b3149b40334d7045bf497f74d4a82cd60" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.585640 4806 scope.go:117] "RemoveContainer" containerID="21a2259284eeb59ff2943baa8e7323075a4bee4436360e0885ed7593a195e81a" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.611351 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-internal-tls-certs\") pod \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.611573 4806 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hljgh\" (UniqueName: \"kubernetes.io/projected/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-kube-api-access-hljgh\") pod \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.611616 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-config-data\") pod \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.611646 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.611736 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-httpd-run\") pod \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.611753 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-logs\") pod \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.611790 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-scripts\") pod \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.611829 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-combined-ca-bundle\") pod \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\" (UID: \"20febbb2-a1ac-4a38-8a1d-594fa53b0b06\") " Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.615302 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "20febbb2-a1ac-4a38-8a1d-594fa53b0b06" (UID: "20febbb2-a1ac-4a38-8a1d-594fa53b0b06"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.615920 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-logs" (OuterVolumeSpecName: "logs") pod "20febbb2-a1ac-4a38-8a1d-594fa53b0b06" (UID: "20febbb2-a1ac-4a38-8a1d-594fa53b0b06"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.626739 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-kube-api-access-hljgh" (OuterVolumeSpecName: "kube-api-access-hljgh") pod "20febbb2-a1ac-4a38-8a1d-594fa53b0b06" (UID: "20febbb2-a1ac-4a38-8a1d-594fa53b0b06"). 
InnerVolumeSpecName "kube-api-access-hljgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.627254 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-scripts" (OuterVolumeSpecName: "scripts") pod "20febbb2-a1ac-4a38-8a1d-594fa53b0b06" (UID: "20febbb2-a1ac-4a38-8a1d-594fa53b0b06"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.627402 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "20febbb2-a1ac-4a38-8a1d-594fa53b0b06" (UID: "20febbb2-a1ac-4a38-8a1d-594fa53b0b06"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.739274 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hljgh\" (UniqueName: \"kubernetes.io/projected/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-kube-api-access-hljgh\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.739552 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.739563 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.739571 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.739581 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.742819 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20febbb2-a1ac-4a38-8a1d-594fa53b0b06" (UID: "20febbb2-a1ac-4a38-8a1d-594fa53b0b06"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.769824 4806 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.795158 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "20febbb2-a1ac-4a38-8a1d-594fa53b0b06" (UID: "20febbb2-a1ac-4a38-8a1d-594fa53b0b06"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.829626 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-config-data" (OuterVolumeSpecName: "config-data") pod "20febbb2-a1ac-4a38-8a1d-594fa53b0b06" (UID: "20febbb2-a1ac-4a38-8a1d-594fa53b0b06"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.840500 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.840548 4806 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.840560 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:37 crc kubenswrapper[4806]: I0126 08:12:37.840569 4806 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20febbb2-a1ac-4a38-8a1d-594fa53b0b06-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.043074 4806 scope.go:117] "RemoveContainer" containerID="faf28906d5ab543cefcb7319c9944f166e49216850d5dc9d199bb6814ca49d79" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.124609 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.142138 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.150892 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:12:38 crc kubenswrapper[4806]: E0126 08:12:38.151251 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20febbb2-a1ac-4a38-8a1d-594fa53b0b06" containerName="glance-httpd" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.151267 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="20febbb2-a1ac-4a38-8a1d-594fa53b0b06" containerName="glance-httpd" Jan 26 08:12:38 crc kubenswrapper[4806]: E0126 08:12:38.151282 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20febbb2-a1ac-4a38-8a1d-594fa53b0b06" containerName="glance-log" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.151290 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="20febbb2-a1ac-4a38-8a1d-594fa53b0b06" containerName="glance-log" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.151483 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="20febbb2-a1ac-4a38-8a1d-594fa53b0b06" containerName="glance-httpd" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.151497 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="20febbb2-a1ac-4a38-8a1d-594fa53b0b06" containerName="glance-log" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.163694 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.166438 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.166642 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.186587 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.252949 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b728490f-ad14-45d0-aa07-096fecf7be60-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.253000 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gbr2\" (UniqueName: \"kubernetes.io/projected/b728490f-ad14-45d0-aa07-096fecf7be60-kube-api-access-2gbr2\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.253039 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b728490f-ad14-45d0-aa07-096fecf7be60-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.253064 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.253089 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b728490f-ad14-45d0-aa07-096fecf7be60-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.253127 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b728490f-ad14-45d0-aa07-096fecf7be60-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.253148 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b728490f-ad14-45d0-aa07-096fecf7be60-logs\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.253201 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b728490f-ad14-45d0-aa07-096fecf7be60-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.355312 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b728490f-ad14-45d0-aa07-096fecf7be60-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.355410 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b728490f-ad14-45d0-aa07-096fecf7be60-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.355438 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b728490f-ad14-45d0-aa07-096fecf7be60-logs\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.355497 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b728490f-ad14-45d0-aa07-096fecf7be60-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.355573 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b728490f-ad14-45d0-aa07-096fecf7be60-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.355601 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gbr2\" (UniqueName: \"kubernetes.io/projected/b728490f-ad14-45d0-aa07-096fecf7be60-kube-api-access-2gbr2\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.355633 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b728490f-ad14-45d0-aa07-096fecf7be60-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.355653 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.355925 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.356334 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b728490f-ad14-45d0-aa07-096fecf7be60-logs\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.356372 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b728490f-ad14-45d0-aa07-096fecf7be60-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.362336 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b728490f-ad14-45d0-aa07-096fecf7be60-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.362921 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b728490f-ad14-45d0-aa07-096fecf7be60-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.365606 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b728490f-ad14-45d0-aa07-096fecf7be60-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.366178 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b728490f-ad14-45d0-aa07-096fecf7be60-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.385114 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gbr2\" (UniqueName: \"kubernetes.io/projected/b728490f-ad14-45d0-aa07-096fecf7be60-kube-api-access-2gbr2\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.418207 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"b728490f-ad14-45d0-aa07-096fecf7be60\") " pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.528597 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" event={"ID":"8460c49b-4775-409d-b4c0-177929af70a4","Type":"ContainerStarted","Data":"84b52646eda1e4dc387ce409bcd0f5126282653ec066486e42e6f433e58586ee"} Jan 
26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.529066 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.545087 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f","Type":"ContainerStarted","Data":"b5462e22b7664a5e55dcfaf707edacacbaa2f593abe1a709802ed1010136057b"} Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.547917 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.559956 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" podStartSLOduration=22.559937841 podStartE2EDuration="22.559937841s" podCreationTimestamp="2026-01-26 08:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:38.554763376 +0000 UTC m=+1137.819171442" watchObservedRunningTime="2026-01-26 08:12:38.559937841 +0000 UTC m=+1137.824345897" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.813853 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5687f48547-kz5md" Jan 26 08:12:38 crc kubenswrapper[4806]: I0126 08:12:38.869376 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7dd479566-6k7mz"] Jan 26 08:12:38 crc kubenswrapper[4806]: E0126 08:12:38.961154 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e435ae63256b96544903f4c8e60a13b3975f9f77ccedf6957afb8c1d8a46baf" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 08:12:38 crc kubenswrapper[4806]: E0126 08:12:38.963713 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e435ae63256b96544903f4c8e60a13b3975f9f77ccedf6957afb8c1d8a46baf" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 08:12:38 crc kubenswrapper[4806]: E0126 08:12:38.966250 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e435ae63256b96544903f4c8e60a13b3975f9f77ccedf6957afb8c1d8a46baf" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 08:12:38 crc kubenswrapper[4806]: E0126 08:12:38.966304 4806 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5c479d9749-55sxk" podUID="ceffb75b-59c2-41e0-96e9-4ccbb69ee956" containerName="heat-engine" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.053112 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20febbb2-a1ac-4a38-8a1d-594fa53b0b06" path="/var/lib/kubelet/pods/20febbb2-a1ac-4a38-8a1d-594fa53b0b06/volumes" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.174322 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 
08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.387758 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.390281 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-config-data\") pod \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.390352 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w89l\" (UniqueName: \"kubernetes.io/projected/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-kube-api-access-7w89l\") pod \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.390387 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-config-data-custom\") pod \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.390419 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-combined-ca-bundle\") pod \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\" (UID: \"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2\") " Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.403280 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" (UID: "c8af52b9-239c-4e7e-9f4e-80aa1e4148a2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.417711 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-kube-api-access-7w89l" (OuterVolumeSpecName: "kube-api-access-7w89l") pod "c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" (UID: "c8af52b9-239c-4e7e-9f4e-80aa1e4148a2"). InnerVolumeSpecName "kube-api-access-7w89l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.442226 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" (UID: "c8af52b9-239c-4e7e-9f4e-80aa1e4148a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.450652 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-957b6fbf8-7f82k" podUID="ce987c26-24cc-40b4-9898-9f00d4eda52e" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.178:8004/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.504760 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w89l\" (UniqueName: \"kubernetes.io/projected/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-kube-api-access-7w89l\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.504790 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.504798 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.516861 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-config-data" (OuterVolumeSpecName: "config-data") pod "c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" (UID: "c8af52b9-239c-4e7e-9f4e-80aa1e4148a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.523122 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-56c9b6cf4b-dl98j" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.595806 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5c84c55f78-ls58x"] Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.610874 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.611667 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f0a1a709-885d-4f4e-a2a2-51d7bad26f6f","Type":"ContainerStarted","Data":"be7a63c734a1350eee2fed0ea6fd489cdfefc2bacca902a3ec677b22f0bcd17e"} Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.632865 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b728490f-ad14-45d0-aa07-096fecf7be60","Type":"ContainerStarted","Data":"0a3e77ecce3ced0350a0d9848a1d594959c3927d43ef32e0a95bc6483a584d0c"} Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.644375 4806 generic.go:334] "Generic (PLEG): container finished" podID="8460c49b-4775-409d-b4c0-177929af70a4" containerID="84b52646eda1e4dc387ce409bcd0f5126282653ec066486e42e6f433e58586ee" exitCode=1 Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.644432 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" event={"ID":"8460c49b-4775-409d-b4c0-177929af70a4","Type":"ContainerDied","Data":"84b52646eda1e4dc387ce409bcd0f5126282653ec066486e42e6f433e58586ee"} Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 
08:12:39.644464 4806 scope.go:117] "RemoveContainer" containerID="faf28906d5ab543cefcb7319c9944f166e49216850d5dc9d199bb6814ca49d79" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.644961 4806 scope.go:117] "RemoveContainer" containerID="84b52646eda1e4dc387ce409bcd0f5126282653ec066486e42e6f433e58586ee" Jan 26 08:12:39 crc kubenswrapper[4806]: E0126 08:12:39.645162 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 20s restarting failed container=heat-cfnapi pod=heat-cfnapi-5c84c55f78-ls58x_openstack(8460c49b-4775-409d-b4c0-177929af70a4)\"" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" podUID="8460c49b-4775-409d-b4c0-177929af70a4" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.658149 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dd479566-6k7mz" event={"ID":"c8af52b9-239c-4e7e-9f4e-80aa1e4148a2","Type":"ContainerDied","Data":"61da011041d5b73b034f8862fab5a36b153fe45e069f97faa7148451fea10c54"} Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.662909 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7dd479566-6k7mz" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.671129 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.671109206 podStartE2EDuration="4.671109206s" podCreationTimestamp="2026-01-26 08:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:39.656671003 +0000 UTC m=+1138.921079059" watchObservedRunningTime="2026-01-26 08:12:39.671109206 +0000 UTC m=+1138.935517262" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.752691 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7dd479566-6k7mz"] Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.757671 4806 scope.go:117] "RemoveContainer" containerID="1926db2c196dcd712953e733570421ccb04cd0962bd97581f32df0b08246e3f9" Jan 26 08:12:39 crc kubenswrapper[4806]: I0126 08:12:39.761365 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7dd479566-6k7mz"] Jan 26 08:12:40 crc kubenswrapper[4806]: I0126 08:12:40.668325 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b728490f-ad14-45d0-aa07-096fecf7be60","Type":"ContainerStarted","Data":"a787ef55f2f73c4d6b35e706ad112dfe1b3ed186a0a18abd7ccc350b5fc9fcd6"} Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.053001 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" path="/var/lib/kubelet/pods/c8af52b9-239c-4e7e-9f4e-80aa1e4148a2/volumes" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.068915 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.144980 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f644c\" (UniqueName: \"kubernetes.io/projected/8460c49b-4775-409d-b4c0-177929af70a4-kube-api-access-f644c\") pod \"8460c49b-4775-409d-b4c0-177929af70a4\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.145064 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-config-data\") pod \"8460c49b-4775-409d-b4c0-177929af70a4\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.145218 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-config-data-custom\") pod \"8460c49b-4775-409d-b4c0-177929af70a4\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.145297 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-combined-ca-bundle\") pod \"8460c49b-4775-409d-b4c0-177929af70a4\" (UID: \"8460c49b-4775-409d-b4c0-177929af70a4\") " Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.162858 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8460c49b-4775-409d-b4c0-177929af70a4-kube-api-access-f644c" (OuterVolumeSpecName: "kube-api-access-f644c") pod "8460c49b-4775-409d-b4c0-177929af70a4" (UID: "8460c49b-4775-409d-b4c0-177929af70a4"). InnerVolumeSpecName "kube-api-access-f644c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.176786 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8460c49b-4775-409d-b4c0-177929af70a4" (UID: "8460c49b-4775-409d-b4c0-177929af70a4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.198184 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8460c49b-4775-409d-b4c0-177929af70a4" (UID: "8460c49b-4775-409d-b4c0-177929af70a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.247909 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f644c\" (UniqueName: \"kubernetes.io/projected/8460c49b-4775-409d-b4c0-177929af70a4-kube-api-access-f644c\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.247939 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.247948 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.288670 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-config-data" (OuterVolumeSpecName: "config-data") pod "8460c49b-4775-409d-b4c0-177929af70a4" (UID: "8460c49b-4775-409d-b4c0-177929af70a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.349540 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8460c49b-4775-409d-b4c0-177929af70a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.719615 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b728490f-ad14-45d0-aa07-096fecf7be60","Type":"ContainerStarted","Data":"dd508a1e684cd1432c9bcf357a1eb82c5a6b06f228a40bb6599d8a695d2a037d"} Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.727246 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" event={"ID":"8460c49b-4775-409d-b4c0-177929af70a4","Type":"ContainerDied","Data":"59f3ab31eadbbf71420c82d745e783f3c0861db07a3211d4651d783d14accf1d"} Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.727299 4806 scope.go:117] "RemoveContainer" containerID="84b52646eda1e4dc387ce409bcd0f5126282653ec066486e42e6f433e58586ee" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.727450 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5c84c55f78-ls58x" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.766552 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.766536187 podStartE2EDuration="3.766536187s" podCreationTimestamp="2026-01-26 08:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:41.764700626 +0000 UTC m=+1141.029108682" watchObservedRunningTime="2026-01-26 08:12:41.766536187 +0000 UTC m=+1141.030944233" Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.821578 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5c84c55f78-ls58x"] Jan 26 08:12:41 crc kubenswrapper[4806]: I0126 08:12:41.832281 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5c84c55f78-ls58x"] Jan 26 08:12:43 crc kubenswrapper[4806]: I0126 08:12:43.051532 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8460c49b-4775-409d-b4c0-177929af70a4" path="/var/lib/kubelet/pods/8460c49b-4775-409d-b4c0-177929af70a4/volumes" Jan 26 08:12:44 crc kubenswrapper[4806]: I0126 08:12:44.174252 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 08:12:44 crc kubenswrapper[4806]: I0126 08:12:44.764939 4806 generic.go:334] "Generic (PLEG): container finished" podID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerID="607ba68a56fad93daf7aed89180eebc182e9a4b8b1c9d74e9080dea6a7125203" exitCode=0 Jan 26 08:12:44 crc kubenswrapper[4806]: I0126 08:12:44.765294 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d","Type":"ContainerDied","Data":"607ba68a56fad93daf7aed89180eebc182e9a4b8b1c9d74e9080dea6a7125203"} Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.162345 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.219117 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-config-data\") pod \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.220183 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xf72\" (UniqueName: \"kubernetes.io/projected/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-kube-api-access-8xf72\") pod \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.220307 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-scripts\") pod \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.220400 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-log-httpd\") pod \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.220601 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-run-httpd\") pod \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.220684 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-sg-core-conf-yaml\") pod \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.220800 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-combined-ca-bundle\") pod \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\" (UID: \"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d\") " Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.220938 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" (UID: "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.221075 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" (UID: "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.243765 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-scripts" (OuterVolumeSpecName: "scripts") pod "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" (UID: "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.263733 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-kube-api-access-8xf72" (OuterVolumeSpecName: "kube-api-access-8xf72") pod "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" (UID: "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d"). InnerVolumeSpecName "kube-api-access-8xf72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.296613 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" (UID: "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.324933 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.324969 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.324982 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xf72\" (UniqueName: \"kubernetes.io/projected/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-kube-api-access-8xf72\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.324990 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.324999 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.354479 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-config-data" (OuterVolumeSpecName: "config-data") pod "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" (UID: "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.357731 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" (UID: "59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.431177 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.431210 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.777743 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d","Type":"ContainerDied","Data":"02fee1900d98ef883ffd9586f7951126ea9eddaffbbcab6677958d1eba638026"} Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.777804 4806 scope.go:117] "RemoveContainer" containerID="4e6187333ba99e41a8188cef5199079bc5c96e5506b03b658f1a442087e812ba" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.777975 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.838639 4806 scope.go:117] "RemoveContainer" containerID="4e6dc21932d5d19f7362d957729c15bbd2db014cfef64a8569c768604e06b64c" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.865281 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.878800 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.927441 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.927484 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.948652 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:45 crc kubenswrapper[4806]: E0126 08:12:45.949222 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" containerName="heat-api" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.949294 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" containerName="heat-api" Jan 26 08:12:45 crc kubenswrapper[4806]: E0126 08:12:45.949359 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460c49b-4775-409d-b4c0-177929af70a4" containerName="heat-cfnapi" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.949418 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460c49b-4775-409d-b4c0-177929af70a4" containerName="heat-cfnapi" Jan 26 08:12:45 crc kubenswrapper[4806]: E0126 08:12:45.949470 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="sg-core" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.949562 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="sg-core" Jan 26 08:12:45 crc kubenswrapper[4806]: E0126 08:12:45.949629 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460c49b-4775-409d-b4c0-177929af70a4" 
containerName="heat-cfnapi" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.949680 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460c49b-4775-409d-b4c0-177929af70a4" containerName="heat-cfnapi" Jan 26 08:12:45 crc kubenswrapper[4806]: E0126 08:12:45.949810 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8460c49b-4775-409d-b4c0-177929af70a4" containerName="heat-cfnapi" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.949864 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8460c49b-4775-409d-b4c0-177929af70a4" containerName="heat-cfnapi" Jan 26 08:12:45 crc kubenswrapper[4806]: E0126 08:12:45.949928 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" containerName="heat-api" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.949977 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" containerName="heat-api" Jan 26 08:12:45 crc kubenswrapper[4806]: E0126 08:12:45.950038 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" containerName="heat-api" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.950096 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" containerName="heat-api" Jan 26 08:12:45 crc kubenswrapper[4806]: E0126 08:12:45.950165 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="ceilometer-notification-agent" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.950221 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="ceilometer-notification-agent" Jan 26 08:12:45 crc kubenswrapper[4806]: E0126 08:12:45.950299 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="proxy-httpd" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.950372 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="proxy-httpd" Jan 26 08:12:45 crc kubenswrapper[4806]: E0126 08:12:45.950438 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="ceilometer-central-agent" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.950491 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="ceilometer-central-agent" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.950739 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="ceilometer-notification-agent" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.950816 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" containerName="heat-api" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.950871 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460c49b-4775-409d-b4c0-177929af70a4" containerName="heat-cfnapi" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.950921 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="ceilometer-central-agent" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.950979 4806 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8460c49b-4775-409d-b4c0-177929af70a4" containerName="heat-cfnapi" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.951032 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" containerName="heat-api" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.951089 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="proxy-httpd" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.951154 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" containerName="sg-core" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.951541 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8460c49b-4775-409d-b4c0-177929af70a4" containerName="heat-cfnapi" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.951615 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8af52b9-239c-4e7e-9f4e-80aa1e4148a2" containerName="heat-api" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.952862 4806 scope.go:117] "RemoveContainer" containerID="03f3f7622afb237e8be581542fbe6897e2adc9b1d714d3eb253e01ada8cd7b45" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.964319 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.964622 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.967361 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.967358 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 08:12:45 crc kubenswrapper[4806]: I0126 08:12:45.970964 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.010301 4806 scope.go:117] "RemoveContainer" containerID="607ba68a56fad93daf7aed89180eebc182e9a4b8b1c9d74e9080dea6a7125203" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.020505 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.050658 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-config-data\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.050708 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjkzg\" (UniqueName: \"kubernetes.io/projected/f2eb6ded-d014-4732-bf29-d873534b7e1a-kube-api-access-gjkzg\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.050771 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " 
pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.050792 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2eb6ded-d014-4732-bf29-d873534b7e1a-run-httpd\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.050892 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2eb6ded-d014-4732-bf29-d873534b7e1a-log-httpd\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.051071 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.051136 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-scripts\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.153037 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjkzg\" (UniqueName: \"kubernetes.io/projected/f2eb6ded-d014-4732-bf29-d873534b7e1a-kube-api-access-gjkzg\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.153119 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.153149 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2eb6ded-d014-4732-bf29-d873534b7e1a-run-httpd\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.153188 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2eb6ded-d014-4732-bf29-d873534b7e1a-log-httpd\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.153253 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.153286 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-scripts\") pod \"ceilometer-0\" (UID: 
\"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.153410 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-config-data\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.154082 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2eb6ded-d014-4732-bf29-d873534b7e1a-log-httpd\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.155149 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2eb6ded-d014-4732-bf29-d873534b7e1a-run-httpd\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.159428 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.159871 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-scripts\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.162063 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-config-data\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.162564 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.171802 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjkzg\" (UniqueName: \"kubernetes.io/projected/f2eb6ded-d014-4732-bf29-d873534b7e1a-kube-api-access-gjkzg\") pod \"ceilometer-0\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.565163 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.811237 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 08:12:46 crc kubenswrapper[4806]: I0126 08:12:46.811590 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 08:12:47 crc kubenswrapper[4806]: I0126 08:12:47.063906 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d" path="/var/lib/kubelet/pods/59eb0fbe-8f63-41b2-9d40-0af8db4e8b4d/volumes" Jan 26 08:12:47 crc kubenswrapper[4806]: I0126 08:12:47.152552 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:47 crc kubenswrapper[4806]: I0126 08:12:47.817240 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2eb6ded-d014-4732-bf29-d873534b7e1a","Type":"ContainerStarted","Data":"26f96b160b3b853c332cd3a83a6829841998b5e562a9f904b2f547d6b6fac338"} Jan 26 08:12:48 crc kubenswrapper[4806]: I0126 08:12:48.548977 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:48 crc kubenswrapper[4806]: I0126 08:12:48.549305 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:48 crc kubenswrapper[4806]: I0126 08:12:48.605760 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:48 crc kubenswrapper[4806]: I0126 08:12:48.611775 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:48 crc kubenswrapper[4806]: I0126 08:12:48.828226 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2eb6ded-d014-4732-bf29-d873534b7e1a","Type":"ContainerStarted","Data":"1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96"} Jan 26 08:12:48 crc kubenswrapper[4806]: I0126 08:12:48.828273 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 08:12:48 crc kubenswrapper[4806]: I0126 08:12:48.828285 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2eb6ded-d014-4732-bf29-d873534b7e1a","Type":"ContainerStarted","Data":"397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727"} Jan 26 08:12:48 crc kubenswrapper[4806]: I0126 08:12:48.828289 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 08:12:48 crc kubenswrapper[4806]: I0126 08:12:48.828773 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:48 crc kubenswrapper[4806]: I0126 08:12:48.828804 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:48 crc kubenswrapper[4806]: E0126 08:12:48.950787 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e435ae63256b96544903f4c8e60a13b3975f9f77ccedf6957afb8c1d8a46baf" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 08:12:48 crc kubenswrapper[4806]: E0126 08:12:48.960751 4806 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e435ae63256b96544903f4c8e60a13b3975f9f77ccedf6957afb8c1d8a46baf" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 08:12:48 crc kubenswrapper[4806]: E0126 08:12:48.964335 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e435ae63256b96544903f4c8e60a13b3975f9f77ccedf6957afb8c1d8a46baf" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 26 08:12:48 crc kubenswrapper[4806]: E0126 08:12:48.964409 4806 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5c479d9749-55sxk" podUID="ceffb75b-59c2-41e0-96e9-4ccbb69ee956" containerName="heat-engine" Jan 26 08:12:49 crc kubenswrapper[4806]: I0126 08:12:49.839436 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2eb6ded-d014-4732-bf29-d873534b7e1a","Type":"ContainerStarted","Data":"4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484"} Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.398750 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-kljvf"] Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.400219 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kljvf" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.409819 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kljvf"] Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.457625 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5x4d\" (UniqueName: \"kubernetes.io/projected/f8e1e344-4554-4155-bb19-26a51af1af1a-kube-api-access-b5x4d\") pod \"nova-api-db-create-kljvf\" (UID: \"f8e1e344-4554-4155-bb19-26a51af1af1a\") " pod="openstack/nova-api-db-create-kljvf" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.457733 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e1e344-4554-4155-bb19-26a51af1af1a-operator-scripts\") pod \"nova-api-db-create-kljvf\" (UID: \"f8e1e344-4554-4155-bb19-26a51af1af1a\") " pod="openstack/nova-api-db-create-kljvf" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.525360 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.559197 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5x4d\" (UniqueName: \"kubernetes.io/projected/f8e1e344-4554-4155-bb19-26a51af1af1a-kube-api-access-b5x4d\") pod \"nova-api-db-create-kljvf\" (UID: \"f8e1e344-4554-4155-bb19-26a51af1af1a\") " pod="openstack/nova-api-db-create-kljvf" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.559293 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e1e344-4554-4155-bb19-26a51af1af1a-operator-scripts\") pod \"nova-api-db-create-kljvf\" (UID: 
\"f8e1e344-4554-4155-bb19-26a51af1af1a\") " pod="openstack/nova-api-db-create-kljvf" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.560207 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e1e344-4554-4155-bb19-26a51af1af1a-operator-scripts\") pod \"nova-api-db-create-kljvf\" (UID: \"f8e1e344-4554-4155-bb19-26a51af1af1a\") " pod="openstack/nova-api-db-create-kljvf" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.577124 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5x4d\" (UniqueName: \"kubernetes.io/projected/f8e1e344-4554-4155-bb19-26a51af1af1a-kube-api-access-b5x4d\") pod \"nova-api-db-create-kljvf\" (UID: \"f8e1e344-4554-4155-bb19-26a51af1af1a\") " pod="openstack/nova-api-db-create-kljvf" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.609203 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-5sxlk"] Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.610371 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5sxlk" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.621606 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5sxlk"] Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.725284 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-t5vh7"] Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.727634 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-t5vh7" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.741850 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-fb8d-account-create-update-jt484"] Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.742981 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fb8d-account-create-update-jt484" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.751167 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.765116 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f48c46b-0896-4b60-8c97-f9b6608a368f-operator-scripts\") pod \"nova-cell0-db-create-5sxlk\" (UID: \"8f48c46b-0896-4b60-8c97-f9b6608a368f\") " pod="openstack/nova-cell0-db-create-5sxlk" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.765241 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhx5h\" (UniqueName: \"kubernetes.io/projected/8f48c46b-0896-4b60-8c97-f9b6608a368f-kube-api-access-rhx5h\") pod \"nova-cell0-db-create-5sxlk\" (UID: \"8f48c46b-0896-4b60-8c97-f9b6608a368f\") " pod="openstack/nova-cell0-db-create-5sxlk" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.767805 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-t5vh7"] Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.789370 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kljvf" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.848115 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fb8d-account-create-update-jt484"] Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.870150 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e44fbc1-418c-4be1-bd7e-70489014622c-operator-scripts\") pod \"nova-cell1-db-create-t5vh7\" (UID: \"2e44fbc1-418c-4be1-bd7e-70489014622c\") " pod="openstack/nova-cell1-db-create-t5vh7" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.870200 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e-operator-scripts\") pod \"nova-api-fb8d-account-create-update-jt484\" (UID: \"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e\") " pod="openstack/nova-api-fb8d-account-create-update-jt484" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.870238 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q28sq\" (UniqueName: \"kubernetes.io/projected/2e44fbc1-418c-4be1-bd7e-70489014622c-kube-api-access-q28sq\") pod \"nova-cell1-db-create-t5vh7\" (UID: \"2e44fbc1-418c-4be1-bd7e-70489014622c\") " pod="openstack/nova-cell1-db-create-t5vh7" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.870325 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f48c46b-0896-4b60-8c97-f9b6608a368f-operator-scripts\") pod \"nova-cell0-db-create-5sxlk\" (UID: \"8f48c46b-0896-4b60-8c97-f9b6608a368f\") " pod="openstack/nova-cell0-db-create-5sxlk" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.870363 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7mgt\" (UniqueName: \"kubernetes.io/projected/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e-kube-api-access-l7mgt\") pod \"nova-api-fb8d-account-create-update-jt484\" (UID: \"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e\") " pod="openstack/nova-api-fb8d-account-create-update-jt484" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.870390 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhx5h\" (UniqueName: \"kubernetes.io/projected/8f48c46b-0896-4b60-8c97-f9b6608a368f-kube-api-access-rhx5h\") pod \"nova-cell0-db-create-5sxlk\" (UID: \"8f48c46b-0896-4b60-8c97-f9b6608a368f\") " pod="openstack/nova-cell0-db-create-5sxlk" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.871353 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f48c46b-0896-4b60-8c97-f9b6608a368f-operator-scripts\") pod \"nova-cell0-db-create-5sxlk\" (UID: \"8f48c46b-0896-4b60-8c97-f9b6608a368f\") " pod="openstack/nova-cell0-db-create-5sxlk" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.906614 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhx5h\" (UniqueName: \"kubernetes.io/projected/8f48c46b-0896-4b60-8c97-f9b6608a368f-kube-api-access-rhx5h\") pod \"nova-cell0-db-create-5sxlk\" (UID: \"8f48c46b-0896-4b60-8c97-f9b6608a368f\") " pod="openstack/nova-cell0-db-create-5sxlk" Jan 26 08:12:50 crc 
kubenswrapper[4806]: I0126 08:12:50.932370 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2eb6ded-d014-4732-bf29-d873534b7e1a","Type":"ContainerStarted","Data":"523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd"} Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.932704 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="ceilometer-central-agent" containerID="cri-o://397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727" gracePeriod=30 Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.933163 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="proxy-httpd" containerID="cri-o://523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd" gracePeriod=30 Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.933178 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="sg-core" containerID="cri-o://4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484" gracePeriod=30 Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.933188 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="ceilometer-notification-agent" containerID="cri-o://1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96" gracePeriod=30 Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.933208 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.945244 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-fe96-account-create-update-fc2tj"] Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.945258 4806 generic.go:334] "Generic (PLEG): container finished" podID="ceffb75b-59c2-41e0-96e9-4ccbb69ee956" containerID="5e435ae63256b96544903f4c8e60a13b3975f9f77ccedf6957afb8c1d8a46baf" exitCode=0 Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.946505 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5c479d9749-55sxk" event={"ID":"ceffb75b-59c2-41e0-96e9-4ccbb69ee956","Type":"ContainerDied","Data":"5e435ae63256b96544903f4c8e60a13b3975f9f77ccedf6957afb8c1d8a46baf"} Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.946616 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.961817 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.976176 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7mgt\" (UniqueName: \"kubernetes.io/projected/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e-kube-api-access-l7mgt\") pod \"nova-api-fb8d-account-create-update-jt484\" (UID: \"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e\") " pod="openstack/nova-api-fb8d-account-create-update-jt484" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.976260 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72d6ee23-489e-4a9c-a0d6-277b06b2616f-operator-scripts\") pod \"nova-cell0-fe96-account-create-update-fc2tj\" (UID: \"72d6ee23-489e-4a9c-a0d6-277b06b2616f\") " pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.976304 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e44fbc1-418c-4be1-bd7e-70489014622c-operator-scripts\") pod \"nova-cell1-db-create-t5vh7\" (UID: \"2e44fbc1-418c-4be1-bd7e-70489014622c\") " pod="openstack/nova-cell1-db-create-t5vh7" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.976332 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e-operator-scripts\") pod \"nova-api-fb8d-account-create-update-jt484\" (UID: \"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e\") " pod="openstack/nova-api-fb8d-account-create-update-jt484" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.976364 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q28sq\" (UniqueName: \"kubernetes.io/projected/2e44fbc1-418c-4be1-bd7e-70489014622c-kube-api-access-q28sq\") pod \"nova-cell1-db-create-t5vh7\" (UID: \"2e44fbc1-418c-4be1-bd7e-70489014622c\") " pod="openstack/nova-cell1-db-create-t5vh7" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.976403 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgr9q\" (UniqueName: \"kubernetes.io/projected/72d6ee23-489e-4a9c-a0d6-277b06b2616f-kube-api-access-rgr9q\") pod \"nova-cell0-fe96-account-create-update-fc2tj\" (UID: \"72d6ee23-489e-4a9c-a0d6-277b06b2616f\") " pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.976703 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fe96-account-create-update-fc2tj"] Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.977380 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e-operator-scripts\") pod \"nova-api-fb8d-account-create-update-jt484\" (UID: \"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e\") " pod="openstack/nova-api-fb8d-account-create-update-jt484" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.977427 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2e44fbc1-418c-4be1-bd7e-70489014622c-operator-scripts\") pod \"nova-cell1-db-create-t5vh7\" (UID: \"2e44fbc1-418c-4be1-bd7e-70489014622c\") " pod="openstack/nova-cell1-db-create-t5vh7" Jan 26 08:12:50 crc kubenswrapper[4806]: I0126 08:12:50.979171 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.036262915 podStartE2EDuration="5.979152495s" podCreationTimestamp="2026-01-26 08:12:45 +0000 UTC" firstStartedPulling="2026-01-26 08:12:47.178895128 +0000 UTC m=+1146.443303184" lastFinishedPulling="2026-01-26 08:12:50.121784708 +0000 UTC m=+1149.386192764" observedRunningTime="2026-01-26 08:12:50.972302153 +0000 UTC m=+1150.236710199" watchObservedRunningTime="2026-01-26 08:12:50.979152495 +0000 UTC m=+1150.243560551" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.007180 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7mgt\" (UniqueName: \"kubernetes.io/projected/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e-kube-api-access-l7mgt\") pod \"nova-api-fb8d-account-create-update-jt484\" (UID: \"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e\") " pod="openstack/nova-api-fb8d-account-create-update-jt484" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.014301 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q28sq\" (UniqueName: \"kubernetes.io/projected/2e44fbc1-418c-4be1-bd7e-70489014622c-kube-api-access-q28sq\") pod \"nova-cell1-db-create-t5vh7\" (UID: \"2e44fbc1-418c-4be1-bd7e-70489014622c\") " pod="openstack/nova-cell1-db-create-t5vh7" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.022770 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5sxlk" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.052132 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-t5vh7" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.080065 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72d6ee23-489e-4a9c-a0d6-277b06b2616f-operator-scripts\") pod \"nova-cell0-fe96-account-create-update-fc2tj\" (UID: \"72d6ee23-489e-4a9c-a0d6-277b06b2616f\") " pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.080219 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgr9q\" (UniqueName: \"kubernetes.io/projected/72d6ee23-489e-4a9c-a0d6-277b06b2616f-kube-api-access-rgr9q\") pod \"nova-cell0-fe96-account-create-update-fc2tj\" (UID: \"72d6ee23-489e-4a9c-a0d6-277b06b2616f\") " pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.081240 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72d6ee23-489e-4a9c-a0d6-277b06b2616f-operator-scripts\") pod \"nova-cell0-fe96-account-create-update-fc2tj\" (UID: \"72d6ee23-489e-4a9c-a0d6-277b06b2616f\") " pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.092092 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-fb8d-account-create-update-jt484" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.143493 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgr9q\" (UniqueName: \"kubernetes.io/projected/72d6ee23-489e-4a9c-a0d6-277b06b2616f-kube-api-access-rgr9q\") pod \"nova-cell0-fe96-account-create-update-fc2tj\" (UID: \"72d6ee23-489e-4a9c-a0d6-277b06b2616f\") " pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.145799 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-c947-account-create-update-7ctwz"] Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.152895 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c947-account-create-update-7ctwz" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.157061 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.172654 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c947-account-create-update-7ctwz"] Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.182423 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eda8877-1136-4de4-8bdf-b53e018a7a7b-operator-scripts\") pod \"nova-cell1-c947-account-create-update-7ctwz\" (UID: \"7eda8877-1136-4de4-8bdf-b53e018a7a7b\") " pod="openstack/nova-cell1-c947-account-create-update-7ctwz" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.182657 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjgg2\" (UniqueName: \"kubernetes.io/projected/7eda8877-1136-4de4-8bdf-b53e018a7a7b-kube-api-access-rjgg2\") pod \"nova-cell1-c947-account-create-update-7ctwz\" (UID: \"7eda8877-1136-4de4-8bdf-b53e018a7a7b\") " pod="openstack/nova-cell1-c947-account-create-update-7ctwz" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.289907 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eda8877-1136-4de4-8bdf-b53e018a7a7b-operator-scripts\") pod \"nova-cell1-c947-account-create-update-7ctwz\" (UID: \"7eda8877-1136-4de4-8bdf-b53e018a7a7b\") " pod="openstack/nova-cell1-c947-account-create-update-7ctwz" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.290051 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjgg2\" (UniqueName: \"kubernetes.io/projected/7eda8877-1136-4de4-8bdf-b53e018a7a7b-kube-api-access-rjgg2\") pod \"nova-cell1-c947-account-create-update-7ctwz\" (UID: \"7eda8877-1136-4de4-8bdf-b53e018a7a7b\") " pod="openstack/nova-cell1-c947-account-create-update-7ctwz" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.291293 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eda8877-1136-4de4-8bdf-b53e018a7a7b-operator-scripts\") pod \"nova-cell1-c947-account-create-update-7ctwz\" (UID: \"7eda8877-1136-4de4-8bdf-b53e018a7a7b\") " pod="openstack/nova-cell1-c947-account-create-update-7ctwz" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.333385 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjgg2\" 
(UniqueName: \"kubernetes.io/projected/7eda8877-1136-4de4-8bdf-b53e018a7a7b-kube-api-access-rjgg2\") pod \"nova-cell1-c947-account-create-update-7ctwz\" (UID: \"7eda8877-1136-4de4-8bdf-b53e018a7a7b\") " pod="openstack/nova-cell1-c947-account-create-update-7ctwz" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.358168 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.460094 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.511031 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-config-data-custom\") pod \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.511145 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-config-data\") pod \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.511201 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcglg\" (UniqueName: \"kubernetes.io/projected/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-kube-api-access-lcglg\") pod \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.511256 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-combined-ca-bundle\") pod \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\" (UID: \"ceffb75b-59c2-41e0-96e9-4ccbb69ee956\") " Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.533691 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c947-account-create-update-7ctwz" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.637755 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-kube-api-access-lcglg" (OuterVolumeSpecName: "kube-api-access-lcglg") pod "ceffb75b-59c2-41e0-96e9-4ccbb69ee956" (UID: "ceffb75b-59c2-41e0-96e9-4ccbb69ee956"). InnerVolumeSpecName "kube-api-access-lcglg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.640223 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcglg\" (UniqueName: \"kubernetes.io/projected/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-kube-api-access-lcglg\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.723841 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ceffb75b-59c2-41e0-96e9-4ccbb69ee956" (UID: "ceffb75b-59c2-41e0-96e9-4ccbb69ee956"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.741798 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.776979 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ceffb75b-59c2-41e0-96e9-4ccbb69ee956" (UID: "ceffb75b-59c2-41e0-96e9-4ccbb69ee956"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.808804 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kljvf"] Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.849975 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.907660 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-config-data" (OuterVolumeSpecName: "config-data") pod "ceffb75b-59c2-41e0-96e9-4ccbb69ee956" (UID: "ceffb75b-59c2-41e0-96e9-4ccbb69ee956"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:12:51 crc kubenswrapper[4806]: I0126 08:12:51.955544 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceffb75b-59c2-41e0-96e9-4ccbb69ee956-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.039208 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5c479d9749-55sxk" event={"ID":"ceffb75b-59c2-41e0-96e9-4ccbb69ee956","Type":"ContainerDied","Data":"d408ccfd3c04cfa1714371533cdf8372b3b7f0cc4f09fdc7d0daf1ec778cc50a"} Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.039267 4806 scope.go:117] "RemoveContainer" containerID="5e435ae63256b96544903f4c8e60a13b3975f9f77ccedf6957afb8c1d8a46baf" Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.039401 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5c479d9749-55sxk" Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.074608 4806 generic.go:334] "Generic (PLEG): container finished" podID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerID="4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484" exitCode=2 Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.074668 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2eb6ded-d014-4732-bf29-d873534b7e1a","Type":"ContainerDied","Data":"4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484"} Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.100716 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kljvf" event={"ID":"f8e1e344-4554-4155-bb19-26a51af1af1a","Type":"ContainerStarted","Data":"d5c2c77b17248458b3205b3e1adb8711d79a619bcc26cd38fecbbff263fed3ed"} Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.138075 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5c479d9749-55sxk"] Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.146172 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5c479d9749-55sxk"] Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.312719 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-5sxlk"] Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.346170 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-fb8d-account-create-update-jt484"] Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.357773 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-t5vh7"] Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.390606 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.390699 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.528719 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-c947-account-create-update-7ctwz"] Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.553236 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-fe96-account-create-update-fc2tj"] Jan 26 08:12:52 crc kubenswrapper[4806]: I0126 08:12:52.739859 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.068068 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceffb75b-59c2-41e0-96e9-4ccbb69ee956" path="/var/lib/kubelet/pods/ceffb75b-59c2-41e0-96e9-4ccbb69ee956/volumes" Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.135781 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kljvf" event={"ID":"f8e1e344-4554-4155-bb19-26a51af1af1a","Type":"ContainerStarted","Data":"bc384626e62ad7f9443fb10ffc2563ce19f784c897274d712c0aab1ff5997125"} Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.157769 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c947-account-create-update-7ctwz" event={"ID":"7eda8877-1136-4de4-8bdf-b53e018a7a7b","Type":"ContainerStarted","Data":"7cce2122e8c14d1dedda1a85e55dec327911cbf5b8dbc8a94719235fa96e32c1"} Jan 26 08:12:53 crc 
kubenswrapper[4806]: I0126 08:12:53.157813 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c947-account-create-update-7ctwz" event={"ID":"7eda8877-1136-4de4-8bdf-b53e018a7a7b","Type":"ContainerStarted","Data":"065bfcc4e66bde9b1453df8e1dcfc6b0052debdf734307b21653c5f6c428ec0c"} Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.162726 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-kljvf" podStartSLOduration=3.162700199 podStartE2EDuration="3.162700199s" podCreationTimestamp="2026-01-26 08:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:53.15879035 +0000 UTC m=+1152.423198406" watchObservedRunningTime="2026-01-26 08:12:53.162700199 +0000 UTC m=+1152.427108255" Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.168941 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-t5vh7" event={"ID":"2e44fbc1-418c-4be1-bd7e-70489014622c","Type":"ContainerStarted","Data":"c756510f061ea927de4e4bc3a2c55f0d5e110989246304ce40a43967c0a820a2"} Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.168986 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-t5vh7" event={"ID":"2e44fbc1-418c-4be1-bd7e-70489014622c","Type":"ContainerStarted","Data":"0a7eeb011981a5bfaee7adad8593d1fac044f7148b05cc50bf116d62002569c0"} Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.179462 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5sxlk" event={"ID":"8f48c46b-0896-4b60-8c97-f9b6608a368f","Type":"ContainerStarted","Data":"c27d308f427d70d8482fc608a3c50ed809de85b99715a9227d9e13d191388913"} Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.179686 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5sxlk" event={"ID":"8f48c46b-0896-4b60-8c97-f9b6608a368f","Type":"ContainerStarted","Data":"df52d49cd513daca2c66e1764210572cf3e1c7d4cb61b964f13d76ed34a9fc81"} Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.188977 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-c947-account-create-update-7ctwz" podStartSLOduration=2.188957943 podStartE2EDuration="2.188957943s" podCreationTimestamp="2026-01-26 08:12:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:53.175195308 +0000 UTC m=+1152.439603364" watchObservedRunningTime="2026-01-26 08:12:53.188957943 +0000 UTC m=+1152.453365999" Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.210560 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fb8d-account-create-update-jt484" event={"ID":"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e","Type":"ContainerStarted","Data":"856e69703456bc95bc94372c06144d9d777d50e705747fdc72ee9acee8ba2ac0"} Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.210608 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fb8d-account-create-update-jt484" event={"ID":"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e","Type":"ContainerStarted","Data":"01224c295c8e67dc6a829bb6787af5da1f43e809d40a461ce0faa27686e557ee"} Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.217779 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-t5vh7" 
podStartSLOduration=3.217757929 podStartE2EDuration="3.217757929s" podCreationTimestamp="2026-01-26 08:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:53.191115304 +0000 UTC m=+1152.455523360" watchObservedRunningTime="2026-01-26 08:12:53.217757929 +0000 UTC m=+1152.482165985" Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.228488 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" event={"ID":"72d6ee23-489e-4a9c-a0d6-277b06b2616f","Type":"ContainerStarted","Data":"ef7bc0b4fd6f0106085491af672fa3981bd6bac246489f5f455caab983c8d6d4"} Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.228549 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" event={"ID":"72d6ee23-489e-4a9c-a0d6-277b06b2616f","Type":"ContainerStarted","Data":"cd2aacc90b09b8991a45f85a643da4f5f35f0c3592ed0456b7ab11ea3a2d27a0"} Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.229605 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-5sxlk" podStartSLOduration=3.22958861 podStartE2EDuration="3.22958861s" podCreationTimestamp="2026-01-26 08:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:53.20422128 +0000 UTC m=+1152.468629336" watchObservedRunningTime="2026-01-26 08:12:53.22958861 +0000 UTC m=+1152.493996666" Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.260922 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-fb8d-account-create-update-jt484" podStartSLOduration=3.2608969549999998 podStartE2EDuration="3.260896955s" podCreationTimestamp="2026-01-26 08:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:53.226441442 +0000 UTC m=+1152.490849508" watchObservedRunningTime="2026-01-26 08:12:53.260896955 +0000 UTC m=+1152.525305011" Jan 26 08:12:53 crc kubenswrapper[4806]: I0126 08:12:53.282277 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" podStartSLOduration=3.282257213 podStartE2EDuration="3.282257213s" podCreationTimestamp="2026-01-26 08:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:12:53.249508457 +0000 UTC m=+1152.513916513" watchObservedRunningTime="2026-01-26 08:12:53.282257213 +0000 UTC m=+1152.546665269" Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.237320 4806 generic.go:334] "Generic (PLEG): container finished" podID="72d6ee23-489e-4a9c-a0d6-277b06b2616f" containerID="ef7bc0b4fd6f0106085491af672fa3981bd6bac246489f5f455caab983c8d6d4" exitCode=0 Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.237363 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" event={"ID":"72d6ee23-489e-4a9c-a0d6-277b06b2616f","Type":"ContainerDied","Data":"ef7bc0b4fd6f0106085491af672fa3981bd6bac246489f5f455caab983c8d6d4"} Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.239091 4806 generic.go:334] "Generic (PLEG): container finished" podID="f8e1e344-4554-4155-bb19-26a51af1af1a" 
containerID="bc384626e62ad7f9443fb10ffc2563ce19f784c897274d712c0aab1ff5997125" exitCode=0 Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.239159 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kljvf" event={"ID":"f8e1e344-4554-4155-bb19-26a51af1af1a","Type":"ContainerDied","Data":"bc384626e62ad7f9443fb10ffc2563ce19f784c897274d712c0aab1ff5997125"} Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.240300 4806 generic.go:334] "Generic (PLEG): container finished" podID="7eda8877-1136-4de4-8bdf-b53e018a7a7b" containerID="7cce2122e8c14d1dedda1a85e55dec327911cbf5b8dbc8a94719235fa96e32c1" exitCode=0 Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.240358 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c947-account-create-update-7ctwz" event={"ID":"7eda8877-1136-4de4-8bdf-b53e018a7a7b","Type":"ContainerDied","Data":"7cce2122e8c14d1dedda1a85e55dec327911cbf5b8dbc8a94719235fa96e32c1"} Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.241870 4806 generic.go:334] "Generic (PLEG): container finished" podID="2e44fbc1-418c-4be1-bd7e-70489014622c" containerID="c756510f061ea927de4e4bc3a2c55f0d5e110989246304ce40a43967c0a820a2" exitCode=0 Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.241926 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-t5vh7" event={"ID":"2e44fbc1-418c-4be1-bd7e-70489014622c","Type":"ContainerDied","Data":"c756510f061ea927de4e4bc3a2c55f0d5e110989246304ce40a43967c0a820a2"} Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.243409 4806 generic.go:334] "Generic (PLEG): container finished" podID="8f48c46b-0896-4b60-8c97-f9b6608a368f" containerID="c27d308f427d70d8482fc608a3c50ed809de85b99715a9227d9e13d191388913" exitCode=0 Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.243456 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5sxlk" event={"ID":"8f48c46b-0896-4b60-8c97-f9b6608a368f","Type":"ContainerDied","Data":"c27d308f427d70d8482fc608a3c50ed809de85b99715a9227d9e13d191388913"} Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.244734 4806 generic.go:334] "Generic (PLEG): container finished" podID="bd3fae76-85a4-45ab-87b8-4ccd3303cd0e" containerID="856e69703456bc95bc94372c06144d9d777d50e705747fdc72ee9acee8ba2ac0" exitCode=0 Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.244777 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fb8d-account-create-update-jt484" event={"ID":"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e","Type":"ContainerDied","Data":"856e69703456bc95bc94372c06144d9d777d50e705747fdc72ee9acee8ba2ac0"} Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.408641 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.408750 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 08:12:54 crc kubenswrapper[4806]: I0126 08:12:54.419173 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 08:12:55 crc kubenswrapper[4806]: I0126 08:12:55.764697 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-c947-account-create-update-7ctwz" Jan 26 08:12:55 crc kubenswrapper[4806]: I0126 08:12:55.848865 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eda8877-1136-4de4-8bdf-b53e018a7a7b-operator-scripts\") pod \"7eda8877-1136-4de4-8bdf-b53e018a7a7b\" (UID: \"7eda8877-1136-4de4-8bdf-b53e018a7a7b\") " Jan 26 08:12:55 crc kubenswrapper[4806]: I0126 08:12:55.849302 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjgg2\" (UniqueName: \"kubernetes.io/projected/7eda8877-1136-4de4-8bdf-b53e018a7a7b-kube-api-access-rjgg2\") pod \"7eda8877-1136-4de4-8bdf-b53e018a7a7b\" (UID: \"7eda8877-1136-4de4-8bdf-b53e018a7a7b\") " Jan 26 08:12:55 crc kubenswrapper[4806]: I0126 08:12:55.850401 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eda8877-1136-4de4-8bdf-b53e018a7a7b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7eda8877-1136-4de4-8bdf-b53e018a7a7b" (UID: "7eda8877-1136-4de4-8bdf-b53e018a7a7b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:55 crc kubenswrapper[4806]: I0126 08:12:55.862711 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eda8877-1136-4de4-8bdf-b53e018a7a7b-kube-api-access-rjgg2" (OuterVolumeSpecName: "kube-api-access-rjgg2") pod "7eda8877-1136-4de4-8bdf-b53e018a7a7b" (UID: "7eda8877-1136-4de4-8bdf-b53e018a7a7b"). InnerVolumeSpecName "kube-api-access-rjgg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:55 crc kubenswrapper[4806]: I0126 08:12:55.952040 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eda8877-1136-4de4-8bdf-b53e018a7a7b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:55 crc kubenswrapper[4806]: I0126 08:12:55.953560 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjgg2\" (UniqueName: \"kubernetes.io/projected/7eda8877-1136-4de4-8bdf-b53e018a7a7b-kube-api-access-rjgg2\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.120359 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.124947 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-t5vh7" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.129292 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5sxlk" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.138086 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kljvf" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.164911 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhx5h\" (UniqueName: \"kubernetes.io/projected/8f48c46b-0896-4b60-8c97-f9b6608a368f-kube-api-access-rhx5h\") pod \"8f48c46b-0896-4b60-8c97-f9b6608a368f\" (UID: \"8f48c46b-0896-4b60-8c97-f9b6608a368f\") " Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.164966 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgr9q\" (UniqueName: \"kubernetes.io/projected/72d6ee23-489e-4a9c-a0d6-277b06b2616f-kube-api-access-rgr9q\") pod \"72d6ee23-489e-4a9c-a0d6-277b06b2616f\" (UID: \"72d6ee23-489e-4a9c-a0d6-277b06b2616f\") " Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.165063 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e44fbc1-418c-4be1-bd7e-70489014622c-operator-scripts\") pod \"2e44fbc1-418c-4be1-bd7e-70489014622c\" (UID: \"2e44fbc1-418c-4be1-bd7e-70489014622c\") " Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.165140 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f48c46b-0896-4b60-8c97-f9b6608a368f-operator-scripts\") pod \"8f48c46b-0896-4b60-8c97-f9b6608a368f\" (UID: \"8f48c46b-0896-4b60-8c97-f9b6608a368f\") " Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.165208 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q28sq\" (UniqueName: \"kubernetes.io/projected/2e44fbc1-418c-4be1-bd7e-70489014622c-kube-api-access-q28sq\") pod \"2e44fbc1-418c-4be1-bd7e-70489014622c\" (UID: \"2e44fbc1-418c-4be1-bd7e-70489014622c\") " Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.165242 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72d6ee23-489e-4a9c-a0d6-277b06b2616f-operator-scripts\") pod \"72d6ee23-489e-4a9c-a0d6-277b06b2616f\" (UID: \"72d6ee23-489e-4a9c-a0d6-277b06b2616f\") " Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.177115 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f48c46b-0896-4b60-8c97-f9b6608a368f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f48c46b-0896-4b60-8c97-f9b6608a368f" (UID: "8f48c46b-0896-4b60-8c97-f9b6608a368f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.186275 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72d6ee23-489e-4a9c-a0d6-277b06b2616f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "72d6ee23-489e-4a9c-a0d6-277b06b2616f" (UID: "72d6ee23-489e-4a9c-a0d6-277b06b2616f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.193062 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e44fbc1-418c-4be1-bd7e-70489014622c-kube-api-access-q28sq" (OuterVolumeSpecName: "kube-api-access-q28sq") pod "2e44fbc1-418c-4be1-bd7e-70489014622c" (UID: "2e44fbc1-418c-4be1-bd7e-70489014622c"). InnerVolumeSpecName "kube-api-access-q28sq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.193258 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-fb8d-account-create-update-jt484" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.217642 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e44fbc1-418c-4be1-bd7e-70489014622c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e44fbc1-418c-4be1-bd7e-70489014622c" (UID: "2e44fbc1-418c-4be1-bd7e-70489014622c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.247430 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f48c46b-0896-4b60-8c97-f9b6608a368f-kube-api-access-rhx5h" (OuterVolumeSpecName: "kube-api-access-rhx5h") pod "8f48c46b-0896-4b60-8c97-f9b6608a368f" (UID: "8f48c46b-0896-4b60-8c97-f9b6608a368f"). InnerVolumeSpecName "kube-api-access-rhx5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.262545 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d6ee23-489e-4a9c-a0d6-277b06b2616f-kube-api-access-rgr9q" (OuterVolumeSpecName: "kube-api-access-rgr9q") pod "72d6ee23-489e-4a9c-a0d6-277b06b2616f" (UID: "72d6ee23-489e-4a9c-a0d6-277b06b2616f"). InnerVolumeSpecName "kube-api-access-rgr9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.267310 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e-operator-scripts\") pod \"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e\" (UID: \"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e\") " Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.267406 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e1e344-4554-4155-bb19-26a51af1af1a-operator-scripts\") pod \"f8e1e344-4554-4155-bb19-26a51af1af1a\" (UID: \"f8e1e344-4554-4155-bb19-26a51af1af1a\") " Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.267445 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5x4d\" (UniqueName: \"kubernetes.io/projected/f8e1e344-4554-4155-bb19-26a51af1af1a-kube-api-access-b5x4d\") pod \"f8e1e344-4554-4155-bb19-26a51af1af1a\" (UID: \"f8e1e344-4554-4155-bb19-26a51af1af1a\") " Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.267579 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7mgt\" (UniqueName: \"kubernetes.io/projected/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e-kube-api-access-l7mgt\") pod \"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e\" (UID: \"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e\") " Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.268344 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8e1e344-4554-4155-bb19-26a51af1af1a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f8e1e344-4554-4155-bb19-26a51af1af1a" (UID: "f8e1e344-4554-4155-bb19-26a51af1af1a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.268353 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bd3fae76-85a4-45ab-87b8-4ccd3303cd0e" (UID: "bd3fae76-85a4-45ab-87b8-4ccd3303cd0e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.268416 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f48c46b-0896-4b60-8c97-f9b6608a368f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.268432 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q28sq\" (UniqueName: \"kubernetes.io/projected/2e44fbc1-418c-4be1-bd7e-70489014622c-kube-api-access-q28sq\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.268444 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/72d6ee23-489e-4a9c-a0d6-277b06b2616f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.268453 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhx5h\" (UniqueName: \"kubernetes.io/projected/8f48c46b-0896-4b60-8c97-f9b6608a368f-kube-api-access-rhx5h\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.268465 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgr9q\" (UniqueName: \"kubernetes.io/projected/72d6ee23-489e-4a9c-a0d6-277b06b2616f-kube-api-access-rgr9q\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.268474 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e44fbc1-418c-4be1-bd7e-70489014622c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.280036 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e1e344-4554-4155-bb19-26a51af1af1a-kube-api-access-b5x4d" (OuterVolumeSpecName: "kube-api-access-b5x4d") pod "f8e1e344-4554-4155-bb19-26a51af1af1a" (UID: "f8e1e344-4554-4155-bb19-26a51af1af1a"). InnerVolumeSpecName "kube-api-access-b5x4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.285896 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e-kube-api-access-l7mgt" (OuterVolumeSpecName: "kube-api-access-l7mgt") pod "bd3fae76-85a4-45ab-87b8-4ccd3303cd0e" (UID: "bd3fae76-85a4-45ab-87b8-4ccd3303cd0e"). InnerVolumeSpecName "kube-api-access-l7mgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.294880 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-fb8d-account-create-update-jt484" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.294884 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-fb8d-account-create-update-jt484" event={"ID":"bd3fae76-85a4-45ab-87b8-4ccd3303cd0e","Type":"ContainerDied","Data":"01224c295c8e67dc6a829bb6787af5da1f43e809d40a461ce0faa27686e557ee"} Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.295036 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01224c295c8e67dc6a829bb6787af5da1f43e809d40a461ce0faa27686e557ee" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.296457 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" event={"ID":"72d6ee23-489e-4a9c-a0d6-277b06b2616f","Type":"ContainerDied","Data":"cd2aacc90b09b8991a45f85a643da4f5f35f0c3592ed0456b7ab11ea3a2d27a0"} Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.296504 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd2aacc90b09b8991a45f85a643da4f5f35f0c3592ed0456b7ab11ea3a2d27a0" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.296571 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-fe96-account-create-update-fc2tj" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.305220 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kljvf" event={"ID":"f8e1e344-4554-4155-bb19-26a51af1af1a","Type":"ContainerDied","Data":"d5c2c77b17248458b3205b3e1adb8711d79a619bcc26cd38fecbbff263fed3ed"} Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.305260 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5c2c77b17248458b3205b3e1adb8711d79a619bcc26cd38fecbbff263fed3ed" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.305340 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kljvf" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.310259 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-c947-account-create-update-7ctwz" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.310357 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-c947-account-create-update-7ctwz" event={"ID":"7eda8877-1136-4de4-8bdf-b53e018a7a7b","Type":"ContainerDied","Data":"065bfcc4e66bde9b1453df8e1dcfc6b0052debdf734307b21653c5f6c428ec0c"} Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.310676 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="065bfcc4e66bde9b1453df8e1dcfc6b0052debdf734307b21653c5f6c428ec0c" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.318414 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-t5vh7" event={"ID":"2e44fbc1-418c-4be1-bd7e-70489014622c","Type":"ContainerDied","Data":"0a7eeb011981a5bfaee7adad8593d1fac044f7148b05cc50bf116d62002569c0"} Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.318452 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a7eeb011981a5bfaee7adad8593d1fac044f7148b05cc50bf116d62002569c0" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.318512 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-t5vh7" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.334225 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-5sxlk" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.334430 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-5sxlk" event={"ID":"8f48c46b-0896-4b60-8c97-f9b6608a368f","Type":"ContainerDied","Data":"df52d49cd513daca2c66e1764210572cf3e1c7d4cb61b964f13d76ed34a9fc81"} Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.334463 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df52d49cd513daca2c66e1764210572cf3e1c7d4cb61b964f13d76ed34a9fc81" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.371781 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.371809 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f8e1e344-4554-4155-bb19-26a51af1af1a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.371822 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5x4d\" (UniqueName: \"kubernetes.io/projected/f8e1e344-4554-4155-bb19-26a51af1af1a-kube-api-access-b5x4d\") on node \"crc\" DevicePath \"\"" Jan 26 08:12:56 crc kubenswrapper[4806]: I0126 08:12:56.371831 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7mgt\" (UniqueName: \"kubernetes.io/projected/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e-kube-api-access-l7mgt\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:00 crc kubenswrapper[4806]: I0126 08:13:00.378772 4806 generic.go:334] "Generic (PLEG): container finished" podID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerID="2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc" exitCode=137 Jan 26 08:13:00 crc kubenswrapper[4806]: I0126 08:13:00.379299 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d485d788d-5q4tb" event={"ID":"d7b4ee8d-6333-4683-94c4-b79229c76537","Type":"ContainerDied","Data":"2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc"} Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.176339 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.183982 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wvpkk"] Jan 26 08:13:01 crc kubenswrapper[4806]: E0126 08:13:01.184422 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd3fae76-85a4-45ab-87b8-4ccd3303cd0e" containerName="mariadb-account-create-update" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184447 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd3fae76-85a4-45ab-87b8-4ccd3303cd0e" containerName="mariadb-account-create-update" Jan 26 08:13:01 crc kubenswrapper[4806]: E0126 08:13:01.184477 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184485 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon" Jan 26 08:13:01 crc kubenswrapper[4806]: E0126 08:13:01.184497 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d6ee23-489e-4a9c-a0d6-277b06b2616f" containerName="mariadb-account-create-update" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184507 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d6ee23-489e-4a9c-a0d6-277b06b2616f" containerName="mariadb-account-create-update" Jan 26 08:13:01 crc kubenswrapper[4806]: E0126 08:13:01.184538 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e1e344-4554-4155-bb19-26a51af1af1a" containerName="mariadb-database-create" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184547 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e1e344-4554-4155-bb19-26a51af1af1a" containerName="mariadb-database-create" Jan 26 08:13:01 crc kubenswrapper[4806]: E0126 08:13:01.184561 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceffb75b-59c2-41e0-96e9-4ccbb69ee956" containerName="heat-engine" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184569 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceffb75b-59c2-41e0-96e9-4ccbb69ee956" containerName="heat-engine" Jan 26 08:13:01 crc kubenswrapper[4806]: E0126 08:13:01.184587 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon-log" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184596 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon-log" Jan 26 08:13:01 crc kubenswrapper[4806]: E0126 08:13:01.184610 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f48c46b-0896-4b60-8c97-f9b6608a368f" containerName="mariadb-database-create" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184618 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f48c46b-0896-4b60-8c97-f9b6608a368f" containerName="mariadb-database-create" Jan 26 08:13:01 crc kubenswrapper[4806]: E0126 08:13:01.184629 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e44fbc1-418c-4be1-bd7e-70489014622c" containerName="mariadb-database-create" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184640 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e44fbc1-418c-4be1-bd7e-70489014622c" containerName="mariadb-database-create" Jan 26 08:13:01 crc kubenswrapper[4806]: E0126 08:13:01.184652 4806 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eda8877-1136-4de4-8bdf-b53e018a7a7b" containerName="mariadb-account-create-update" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184662 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eda8877-1136-4de4-8bdf-b53e018a7a7b" containerName="mariadb-account-create-update" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184869 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e44fbc1-418c-4be1-bd7e-70489014622c" containerName="mariadb-database-create" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184889 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e1e344-4554-4155-bb19-26a51af1af1a" containerName="mariadb-database-create" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184904 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd3fae76-85a4-45ab-87b8-4ccd3303cd0e" containerName="mariadb-account-create-update" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184918 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eda8877-1136-4de4-8bdf-b53e018a7a7b" containerName="mariadb-account-create-update" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184926 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d6ee23-489e-4a9c-a0d6-277b06b2616f" containerName="mariadb-account-create-update" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184939 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon-log" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184952 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184962 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceffb75b-59c2-41e0-96e9-4ccbb69ee956" containerName="heat-engine" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.184972 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f48c46b-0896-4b60-8c97-f9b6608a368f" containerName="mariadb-database-create" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.185684 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.188375 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.188491 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-q4qd9" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.188594 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.200728 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wvpkk"] Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.283913 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b4ee8d-6333-4683-94c4-b79229c76537-config-data\") pod \"d7b4ee8d-6333-4683-94c4-b79229c76537\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.284016 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7b4ee8d-6333-4683-94c4-b79229c76537-scripts\") pod \"d7b4ee8d-6333-4683-94c4-b79229c76537\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.284041 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8k79\" (UniqueName: \"kubernetes.io/projected/d7b4ee8d-6333-4683-94c4-b79229c76537-kube-api-access-h8k79\") pod \"d7b4ee8d-6333-4683-94c4-b79229c76537\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.284064 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b4ee8d-6333-4683-94c4-b79229c76537-logs\") pod \"d7b4ee8d-6333-4683-94c4-b79229c76537\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.284170 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-horizon-tls-certs\") pod \"d7b4ee8d-6333-4683-94c4-b79229c76537\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.284203 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-combined-ca-bundle\") pod \"d7b4ee8d-6333-4683-94c4-b79229c76537\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.284239 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-horizon-secret-key\") pod \"d7b4ee8d-6333-4683-94c4-b79229c76537\" (UID: \"d7b4ee8d-6333-4683-94c4-b79229c76537\") " Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.284669 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5558\" (UniqueName: \"kubernetes.io/projected/9e29f7cf-8720-4555-8418-e53025a6bdac-kube-api-access-r5558\") pod 
\"nova-cell0-conductor-db-sync-wvpkk\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.284726 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-config-data\") pod \"nova-cell0-conductor-db-sync-wvpkk\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.284800 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wvpkk\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.284888 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-scripts\") pod \"nova-cell0-conductor-db-sync-wvpkk\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.287882 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7b4ee8d-6333-4683-94c4-b79229c76537-logs" (OuterVolumeSpecName: "logs") pod "d7b4ee8d-6333-4683-94c4-b79229c76537" (UID: "d7b4ee8d-6333-4683-94c4-b79229c76537"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.291112 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d7b4ee8d-6333-4683-94c4-b79229c76537" (UID: "d7b4ee8d-6333-4683-94c4-b79229c76537"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.302766 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b4ee8d-6333-4683-94c4-b79229c76537-kube-api-access-h8k79" (OuterVolumeSpecName: "kube-api-access-h8k79") pod "d7b4ee8d-6333-4683-94c4-b79229c76537" (UID: "d7b4ee8d-6333-4683-94c4-b79229c76537"). InnerVolumeSpecName "kube-api-access-h8k79". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.347321 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b4ee8d-6333-4683-94c4-b79229c76537-config-data" (OuterVolumeSpecName: "config-data") pod "d7b4ee8d-6333-4683-94c4-b79229c76537" (UID: "d7b4ee8d-6333-4683-94c4-b79229c76537"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.351849 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7b4ee8d-6333-4683-94c4-b79229c76537-scripts" (OuterVolumeSpecName: "scripts") pod "d7b4ee8d-6333-4683-94c4-b79229c76537" (UID: "d7b4ee8d-6333-4683-94c4-b79229c76537"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.369515 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7b4ee8d-6333-4683-94c4-b79229c76537" (UID: "d7b4ee8d-6333-4683-94c4-b79229c76537"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.377726 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "d7b4ee8d-6333-4683-94c4-b79229c76537" (UID: "d7b4ee8d-6333-4683-94c4-b79229c76537"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.386999 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5558\" (UniqueName: \"kubernetes.io/projected/9e29f7cf-8720-4555-8418-e53025a6bdac-kube-api-access-r5558\") pod \"nova-cell0-conductor-db-sync-wvpkk\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.387110 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-config-data\") pod \"nova-cell0-conductor-db-sync-wvpkk\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.387191 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wvpkk\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.387275 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-scripts\") pod \"nova-cell0-conductor-db-sync-wvpkk\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.387382 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d7b4ee8d-6333-4683-94c4-b79229c76537-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.387399 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8k79\" (UniqueName: \"kubernetes.io/projected/d7b4ee8d-6333-4683-94c4-b79229c76537-kube-api-access-h8k79\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.387413 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b4ee8d-6333-4683-94c4-b79229c76537-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.387467 4806 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.387480 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.387491 4806 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d7b4ee8d-6333-4683-94c4-b79229c76537-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.387504 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7b4ee8d-6333-4683-94c4-b79229c76537-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.394736 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-scripts\") pod \"nova-cell0-conductor-db-sync-wvpkk\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.395290 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-config-data\") pod \"nova-cell0-conductor-db-sync-wvpkk\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.395963 4806 generic.go:334] "Generic (PLEG): container finished" podID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerID="7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143" exitCode=137 Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.395998 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d485d788d-5q4tb" event={"ID":"d7b4ee8d-6333-4683-94c4-b79229c76537","Type":"ContainerDied","Data":"7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143"} Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.396022 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d485d788d-5q4tb" event={"ID":"d7b4ee8d-6333-4683-94c4-b79229c76537","Type":"ContainerDied","Data":"9ff95e3e3101df9a660674f885ad43770e43e1a900182394428d477ad095fa2b"} Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.396073 4806 scope.go:117] "RemoveContainer" containerID="7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.396195 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d485d788d-5q4tb" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.412333 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5558\" (UniqueName: \"kubernetes.io/projected/9e29f7cf-8720-4555-8418-e53025a6bdac-kube-api-access-r5558\") pod \"nova-cell0-conductor-db-sync-wvpkk\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.412398 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wvpkk\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.486761 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7d485d788d-5q4tb"] Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.494056 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7d485d788d-5q4tb"] Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.506306 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.644304 4806 scope.go:117] "RemoveContainer" containerID="f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.859613 4806 scope.go:117] "RemoveContainer" containerID="2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.927478 4806 scope.go:117] "RemoveContainer" containerID="7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143" Jan 26 08:13:01 crc kubenswrapper[4806]: E0126 08:13:01.927980 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143\": container with ID starting with 7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143 not found: ID does not exist" containerID="7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.928014 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143"} err="failed to get container status \"7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143\": rpc error: code = NotFound desc = could not find container \"7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143\": container with ID starting with 7d67cc37c66ac3a538f6d9969d2611a95d2af70a56ce10b1b4199229b85a1143 not found: ID does not exist" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.928037 4806 scope.go:117] "RemoveContainer" containerID="f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d" Jan 26 08:13:01 crc kubenswrapper[4806]: E0126 08:13:01.928237 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d\": container with ID starting with f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d not found: ID does not exist" 
containerID="f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.928267 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d"} err="failed to get container status \"f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d\": rpc error: code = NotFound desc = could not find container \"f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d\": container with ID starting with f87d035c73c255421b48d4cd47dacbc10b097d6a12332e04ccdd30feaf74373d not found: ID does not exist" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.928282 4806 scope.go:117] "RemoveContainer" containerID="2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc" Jan 26 08:13:01 crc kubenswrapper[4806]: E0126 08:13:01.928488 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc\": container with ID starting with 2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc not found: ID does not exist" containerID="2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc" Jan 26 08:13:01 crc kubenswrapper[4806]: I0126 08:13:01.928634 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc"} err="failed to get container status \"2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc\": rpc error: code = NotFound desc = could not find container \"2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc\": container with ID starting with 2f5c5518703436421c91f464ab9777da39f58b99c3d2a17879d7f4a25c298cfc not found: ID does not exist" Jan 26 08:13:02 crc kubenswrapper[4806]: I0126 08:13:02.342261 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wvpkk"] Jan 26 08:13:02 crc kubenswrapper[4806]: I0126 08:13:02.406259 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wvpkk" event={"ID":"9e29f7cf-8720-4555-8418-e53025a6bdac","Type":"ContainerStarted","Data":"4a6b671a3a800c9827668cd2d40e1b33a5b1ac1cd6eb1e6242f1327ed6e4d44e"} Jan 26 08:13:03 crc kubenswrapper[4806]: I0126 08:13:03.056177 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" path="/var/lib/kubelet/pods/d7b4ee8d-6333-4683-94c4-b79229c76537/volumes" Jan 26 08:13:06 crc kubenswrapper[4806]: I0126 08:13:06.454498 4806 generic.go:334] "Generic (PLEG): container finished" podID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerID="397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727" exitCode=0 Jan 26 08:13:06 crc kubenswrapper[4806]: I0126 08:13:06.454720 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2eb6ded-d014-4732-bf29-d873534b7e1a","Type":"ContainerDied","Data":"397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727"} Jan 26 08:13:12 crc kubenswrapper[4806]: I0126 08:13:12.512755 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wvpkk" event={"ID":"9e29f7cf-8720-4555-8418-e53025a6bdac","Type":"ContainerStarted","Data":"69d771cfe1e96645f8c20328d8864ae997495298b7110716e40c992feda75e0f"} Jan 26 08:13:12 crc 
kubenswrapper[4806]: I0126 08:13:12.535420 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-wvpkk" podStartSLOduration=2.210600247 podStartE2EDuration="11.535399892s" podCreationTimestamp="2026-01-26 08:13:01 +0000 UTC" firstStartedPulling="2026-01-26 08:13:02.355972886 +0000 UTC m=+1161.620380952" lastFinishedPulling="2026-01-26 08:13:11.680772541 +0000 UTC m=+1170.945180597" observedRunningTime="2026-01-26 08:13:12.525666869 +0000 UTC m=+1171.790074945" watchObservedRunningTime="2026-01-26 08:13:12.535399892 +0000 UTC m=+1171.799807948" Jan 26 08:13:16 crc kubenswrapper[4806]: I0126 08:13:16.577169 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.446091 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.604106 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-sg-core-conf-yaml\") pod \"f2eb6ded-d014-4732-bf29-d873534b7e1a\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.604162 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-combined-ca-bundle\") pod \"f2eb6ded-d014-4732-bf29-d873534b7e1a\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.604253 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-config-data\") pod \"f2eb6ded-d014-4732-bf29-d873534b7e1a\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.604329 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2eb6ded-d014-4732-bf29-d873534b7e1a-run-httpd\") pod \"f2eb6ded-d014-4732-bf29-d873534b7e1a\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.604381 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2eb6ded-d014-4732-bf29-d873534b7e1a-log-httpd\") pod \"f2eb6ded-d014-4732-bf29-d873534b7e1a\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.604457 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjkzg\" (UniqueName: \"kubernetes.io/projected/f2eb6ded-d014-4732-bf29-d873534b7e1a-kube-api-access-gjkzg\") pod \"f2eb6ded-d014-4732-bf29-d873534b7e1a\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.604585 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-scripts\") pod \"f2eb6ded-d014-4732-bf29-d873534b7e1a\" (UID: \"f2eb6ded-d014-4732-bf29-d873534b7e1a\") " Jan 26 08:13:21 crc 
kubenswrapper[4806]: I0126 08:13:21.613013 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2eb6ded-d014-4732-bf29-d873534b7e1a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f2eb6ded-d014-4732-bf29-d873534b7e1a" (UID: "f2eb6ded-d014-4732-bf29-d873534b7e1a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.613349 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2eb6ded-d014-4732-bf29-d873534b7e1a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f2eb6ded-d014-4732-bf29-d873534b7e1a" (UID: "f2eb6ded-d014-4732-bf29-d873534b7e1a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.615942 4806 generic.go:334] "Generic (PLEG): container finished" podID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerID="523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd" exitCode=137 Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.615977 4806 generic.go:334] "Generic (PLEG): container finished" podID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerID="1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96" exitCode=137 Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.616010 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2eb6ded-d014-4732-bf29-d873534b7e1a","Type":"ContainerDied","Data":"523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd"} Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.616036 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2eb6ded-d014-4732-bf29-d873534b7e1a","Type":"ContainerDied","Data":"1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96"} Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.616047 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f2eb6ded-d014-4732-bf29-d873534b7e1a","Type":"ContainerDied","Data":"26f96b160b3b853c332cd3a83a6829841998b5e562a9f904b2f547d6b6fac338"} Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.616063 4806 scope.go:117] "RemoveContainer" containerID="523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.616281 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.633742 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-scripts" (OuterVolumeSpecName: "scripts") pod "f2eb6ded-d014-4732-bf29-d873534b7e1a" (UID: "f2eb6ded-d014-4732-bf29-d873534b7e1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.667715 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2eb6ded-d014-4732-bf29-d873534b7e1a-kube-api-access-gjkzg" (OuterVolumeSpecName: "kube-api-access-gjkzg") pod "f2eb6ded-d014-4732-bf29-d873534b7e1a" (UID: "f2eb6ded-d014-4732-bf29-d873534b7e1a"). InnerVolumeSpecName "kube-api-access-gjkzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.708253 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2eb6ded-d014-4732-bf29-d873534b7e1a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.708283 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f2eb6ded-d014-4732-bf29-d873534b7e1a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.708292 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjkzg\" (UniqueName: \"kubernetes.io/projected/f2eb6ded-d014-4732-bf29-d873534b7e1a-kube-api-access-gjkzg\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.708301 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.711661 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f2eb6ded-d014-4732-bf29-d873534b7e1a" (UID: "f2eb6ded-d014-4732-bf29-d873534b7e1a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.730056 4806 scope.go:117] "RemoveContainer" containerID="4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.783731 4806 scope.go:117] "RemoveContainer" containerID="1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.805049 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2eb6ded-d014-4732-bf29-d873534b7e1a" (UID: "f2eb6ded-d014-4732-bf29-d873534b7e1a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.810796 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.810821 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.810937 4806 scope.go:117] "RemoveContainer" containerID="397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.835347 4806 scope.go:117] "RemoveContainer" containerID="523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.839050 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-config-data" (OuterVolumeSpecName: "config-data") pod "f2eb6ded-d014-4732-bf29-d873534b7e1a" (UID: "f2eb6ded-d014-4732-bf29-d873534b7e1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:21 crc kubenswrapper[4806]: E0126 08:13:21.839084 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd\": container with ID starting with 523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd not found: ID does not exist" containerID="523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.839126 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd"} err="failed to get container status \"523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd\": rpc error: code = NotFound desc = could not find container \"523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd\": container with ID starting with 523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd not found: ID does not exist" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.839156 4806 scope.go:117] "RemoveContainer" containerID="4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484" Jan 26 08:13:21 crc kubenswrapper[4806]: E0126 08:13:21.844511 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484\": container with ID starting with 4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484 not found: ID does not exist" containerID="4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.844614 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484"} err="failed to get container status \"4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484\": rpc error: code = NotFound desc = could not find container \"4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484\": 
container with ID starting with 4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484 not found: ID does not exist" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.844641 4806 scope.go:117] "RemoveContainer" containerID="1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96" Jan 26 08:13:21 crc kubenswrapper[4806]: E0126 08:13:21.845695 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96\": container with ID starting with 1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96 not found: ID does not exist" containerID="1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.845732 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96"} err="failed to get container status \"1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96\": rpc error: code = NotFound desc = could not find container \"1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96\": container with ID starting with 1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96 not found: ID does not exist" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.845746 4806 scope.go:117] "RemoveContainer" containerID="397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727" Jan 26 08:13:21 crc kubenswrapper[4806]: E0126 08:13:21.846111 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727\": container with ID starting with 397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727 not found: ID does not exist" containerID="397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.846144 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727"} err="failed to get container status \"397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727\": rpc error: code = NotFound desc = could not find container \"397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727\": container with ID starting with 397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727 not found: ID does not exist" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.846172 4806 scope.go:117] "RemoveContainer" containerID="523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.846490 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd"} err="failed to get container status \"523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd\": rpc error: code = NotFound desc = could not find container \"523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd\": container with ID starting with 523718337d85ef43b4b5506b860d7dfafe651b79561439ed689ce8a7968d65dd not found: ID does not exist" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.846532 4806 scope.go:117] "RemoveContainer" containerID="4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484" 
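The repeated "ContainerStatus from runtime service failed ... NotFound" and "DeleteContainer returned error" entries above come from the kubelet retrying cleanup of horizon and ceilometer containers that CRI-O has already removed; the NotFound status is benign here because the desired end state (container gone) already holds. A minimal sketch of that pattern, assuming only the standard gRPC status/codes packages; removeContainer and removeFn are hypothetical names for illustration, not kubelet source:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer wraps a container-removal call; removeFn stands in for the
// runtime's removal RPC. A NotFound status means the container is already
// gone, so cleanup is treated as already done rather than as a failure,
// which is why the NotFound lines above are logged but harmless.
func removeContainer(id string, removeFn func(string) error) error {
	err := removeFn(id)
	if err == nil {
		return nil
	}
	if status.Code(err) == codes.NotFound {
		fmt.Printf("container %s already removed; nothing to do\n", id)
		return nil
	}
	return fmt.Errorf("remove container %s: %w", id, err)
}

func main() {
	// Simulate the runtime answering NotFound, as CRI-O does in the log above.
	notFound := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeContainer("7d67cc37c66a", notFound); err != nil {
		fmt.Println("unexpected error:", err)
	}
}

Run against the simulated runtime, this takes the "already removed" branch, mirroring the benign DeleteContainer errors recorded above.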
Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.846859 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484"} err="failed to get container status \"4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484\": rpc error: code = NotFound desc = could not find container \"4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484\": container with ID starting with 4b812439c1b601addebfc1d72ccdd85ed0c36946c4f8a78933d832517c641484 not found: ID does not exist" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.846879 4806 scope.go:117] "RemoveContainer" containerID="1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.847224 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96"} err="failed to get container status \"1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96\": rpc error: code = NotFound desc = could not find container \"1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96\": container with ID starting with 1769421a056c9243c15aee5a7cdb2d2f67048a6b96bd6b3c8da9fd219dfb8d96 not found: ID does not exist" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.847247 4806 scope.go:117] "RemoveContainer" containerID="397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.847484 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727"} err="failed to get container status \"397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727\": rpc error: code = NotFound desc = could not find container \"397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727\": container with ID starting with 397abdc1659ca83caeab9c0f603744ca377574767bb2cbfba45f4b067c857727 not found: ID does not exist" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.913357 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2eb6ded-d014-4732-bf29-d873534b7e1a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.955722 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.970287 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.980797 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:21 crc kubenswrapper[4806]: E0126 08:13:21.982633 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="ceilometer-notification-agent" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.982652 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="ceilometer-notification-agent" Jan 26 08:13:21 crc kubenswrapper[4806]: E0126 08:13:21.982666 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.982673 4806 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon" Jan 26 08:13:21 crc kubenswrapper[4806]: E0126 08:13:21.982684 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="ceilometer-central-agent" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.982691 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="ceilometer-central-agent" Jan 26 08:13:21 crc kubenswrapper[4806]: E0126 08:13:21.982704 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="sg-core" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.982710 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="sg-core" Jan 26 08:13:21 crc kubenswrapper[4806]: E0126 08:13:21.982734 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="proxy-httpd" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.982740 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="proxy-httpd" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.983752 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="ceilometer-central-agent" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.983781 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="proxy-httpd" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.983793 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b4ee8d-6333-4683-94c4-b79229c76537" containerName="horizon" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.983803 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="sg-core" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.983824 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" containerName="ceilometer-notification-agent" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.985410 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.987860 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 08:13:21 crc kubenswrapper[4806]: I0126 08:13:21.987885 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.014468 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-config-data\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.014514 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvdl9\" (UniqueName: \"kubernetes.io/projected/7fa73a17-a786-4301-87fb-020f835bf067-kube-api-access-xvdl9\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.014561 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.014589 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-scripts\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.014610 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.014628 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fa73a17-a786-4301-87fb-020f835bf067-run-httpd\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.014645 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fa73a17-a786-4301-87fb-020f835bf067-log-httpd\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.021701 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.082340 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:22 crc kubenswrapper[4806]: E0126 08:13:22.083675 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-xvdl9 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], 
failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="7fa73a17-a786-4301-87fb-020f835bf067" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.116455 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-config-data\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.116496 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvdl9\" (UniqueName: \"kubernetes.io/projected/7fa73a17-a786-4301-87fb-020f835bf067-kube-api-access-xvdl9\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.116564 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.116595 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-scripts\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.116618 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.116638 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fa73a17-a786-4301-87fb-020f835bf067-run-httpd\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.116654 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fa73a17-a786-4301-87fb-020f835bf067-log-httpd\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.117721 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fa73a17-a786-4301-87fb-020f835bf067-log-httpd\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.117803 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fa73a17-a786-4301-87fb-020f835bf067-run-httpd\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.120356 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-scripts\") pod \"ceilometer-0\" (UID: 
\"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.122647 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.122837 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-config-data\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.139795 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvdl9\" (UniqueName: \"kubernetes.io/projected/7fa73a17-a786-4301-87fb-020f835bf067-kube-api-access-xvdl9\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.140765 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.625039 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.634357 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.826254 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-combined-ca-bundle\") pod \"7fa73a17-a786-4301-87fb-020f835bf067\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.826669 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvdl9\" (UniqueName: \"kubernetes.io/projected/7fa73a17-a786-4301-87fb-020f835bf067-kube-api-access-xvdl9\") pod \"7fa73a17-a786-4301-87fb-020f835bf067\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.826814 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-sg-core-conf-yaml\") pod \"7fa73a17-a786-4301-87fb-020f835bf067\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.826940 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fa73a17-a786-4301-87fb-020f835bf067-run-httpd\") pod \"7fa73a17-a786-4301-87fb-020f835bf067\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.827105 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-scripts\") pod \"7fa73a17-a786-4301-87fb-020f835bf067\" (UID: 
\"7fa73a17-a786-4301-87fb-020f835bf067\") " Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.827181 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fa73a17-a786-4301-87fb-020f835bf067-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7fa73a17-a786-4301-87fb-020f835bf067" (UID: "7fa73a17-a786-4301-87fb-020f835bf067"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.827318 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-config-data\") pod \"7fa73a17-a786-4301-87fb-020f835bf067\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.827452 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fa73a17-a786-4301-87fb-020f835bf067-log-httpd\") pod \"7fa73a17-a786-4301-87fb-020f835bf067\" (UID: \"7fa73a17-a786-4301-87fb-020f835bf067\") " Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.827943 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fa73a17-a786-4301-87fb-020f835bf067-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.827982 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fa73a17-a786-4301-87fb-020f835bf067-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7fa73a17-a786-4301-87fb-020f835bf067" (UID: "7fa73a17-a786-4301-87fb-020f835bf067"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.830156 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fa73a17-a786-4301-87fb-020f835bf067" (UID: "7fa73a17-a786-4301-87fb-020f835bf067"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.841540 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7fa73a17-a786-4301-87fb-020f835bf067" (UID: "7fa73a17-a786-4301-87fb-020f835bf067"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.841662 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-scripts" (OuterVolumeSpecName: "scripts") pod "7fa73a17-a786-4301-87fb-020f835bf067" (UID: "7fa73a17-a786-4301-87fb-020f835bf067"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.843673 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa73a17-a786-4301-87fb-020f835bf067-kube-api-access-xvdl9" (OuterVolumeSpecName: "kube-api-access-xvdl9") pod "7fa73a17-a786-4301-87fb-020f835bf067" (UID: "7fa73a17-a786-4301-87fb-020f835bf067"). 
InnerVolumeSpecName "kube-api-access-xvdl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.850466 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-config-data" (OuterVolumeSpecName: "config-data") pod "7fa73a17-a786-4301-87fb-020f835bf067" (UID: "7fa73a17-a786-4301-87fb-020f835bf067"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.928907 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.929158 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.929168 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fa73a17-a786-4301-87fb-020f835bf067-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.929177 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.929187 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvdl9\" (UniqueName: \"kubernetes.io/projected/7fa73a17-a786-4301-87fb-020f835bf067-kube-api-access-xvdl9\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:22 crc kubenswrapper[4806]: I0126 08:13:22.929197 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fa73a17-a786-4301-87fb-020f835bf067-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.051882 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2eb6ded-d014-4732-bf29-d873534b7e1a" path="/var/lib/kubelet/pods/f2eb6ded-d014-4732-bf29-d873534b7e1a/volumes" Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.632063 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.677142 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.708438 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.753037 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.755845 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.758899 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.759308 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.795153 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.946010 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.946097 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-scripts\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.946126 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab397110-992c-4443-bad3-a62a2cb9d02c-run-httpd\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.946917 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-config-data\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.946960 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.947015 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab397110-992c-4443-bad3-a62a2cb9d02c-log-httpd\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:23 crc kubenswrapper[4806]: I0126 08:13:23.947048 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqbcz\" (UniqueName: \"kubernetes.io/projected/ab397110-992c-4443-bad3-a62a2cb9d02c-kube-api-access-mqbcz\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.048382 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 
08:13:24.048455 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-scripts\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.048484 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab397110-992c-4443-bad3-a62a2cb9d02c-run-httpd\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.048565 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-config-data\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.048583 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.048603 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab397110-992c-4443-bad3-a62a2cb9d02c-log-httpd\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.048622 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqbcz\" (UniqueName: \"kubernetes.io/projected/ab397110-992c-4443-bad3-a62a2cb9d02c-kube-api-access-mqbcz\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.049183 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab397110-992c-4443-bad3-a62a2cb9d02c-log-httpd\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.049360 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab397110-992c-4443-bad3-a62a2cb9d02c-run-httpd\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.054198 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.062924 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.062944 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-scripts\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.067919 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-config-data\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.072234 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqbcz\" (UniqueName: \"kubernetes.io/projected/ab397110-992c-4443-bad3-a62a2cb9d02c-kube-api-access-mqbcz\") pod \"ceilometer-0\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.084694 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.547184 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:24 crc kubenswrapper[4806]: W0126 08:13:24.551766 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab397110_992c_4443_bad3_a62a2cb9d02c.slice/crio-c44fbc36893df6f2c343d5f65f908b0ac607f2a12e36923f4a1069e0b3739cfd WatchSource:0}: Error finding container c44fbc36893df6f2c343d5f65f908b0ac607f2a12e36923f4a1069e0b3739cfd: Status 404 returned error can't find the container with id c44fbc36893df6f2c343d5f65f908b0ac607f2a12e36923f4a1069e0b3739cfd Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.641287 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab397110-992c-4443-bad3-a62a2cb9d02c","Type":"ContainerStarted","Data":"c44fbc36893df6f2c343d5f65f908b0ac607f2a12e36923f4a1069e0b3739cfd"} Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.643384 4806 generic.go:334] "Generic (PLEG): container finished" podID="9e29f7cf-8720-4555-8418-e53025a6bdac" containerID="69d771cfe1e96645f8c20328d8864ae997495298b7110716e40c992feda75e0f" exitCode=0 Jan 26 08:13:24 crc kubenswrapper[4806]: I0126 08:13:24.643430 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wvpkk" event={"ID":"9e29f7cf-8720-4555-8418-e53025a6bdac","Type":"ContainerDied","Data":"69d771cfe1e96645f8c20328d8864ae997495298b7110716e40c992feda75e0f"} Jan 26 08:13:25 crc kubenswrapper[4806]: I0126 08:13:25.072278 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa73a17-a786-4301-87fb-020f835bf067" path="/var/lib/kubelet/pods/7fa73a17-a786-4301-87fb-020f835bf067/volumes" Jan 26 08:13:25 crc kubenswrapper[4806]: I0126 08:13:25.657180 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab397110-992c-4443-bad3-a62a2cb9d02c","Type":"ContainerStarted","Data":"a570a0e4c919d2e04d4f6586f83d7d7bc47ffa62c55c5d9e1f65b698749d998a"} Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.052176 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.194691 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-scripts\") pod \"9e29f7cf-8720-4555-8418-e53025a6bdac\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.194750 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-combined-ca-bundle\") pod \"9e29f7cf-8720-4555-8418-e53025a6bdac\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.194862 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-config-data\") pod \"9e29f7cf-8720-4555-8418-e53025a6bdac\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.194965 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5558\" (UniqueName: \"kubernetes.io/projected/9e29f7cf-8720-4555-8418-e53025a6bdac-kube-api-access-r5558\") pod \"9e29f7cf-8720-4555-8418-e53025a6bdac\" (UID: \"9e29f7cf-8720-4555-8418-e53025a6bdac\") " Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.216699 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e29f7cf-8720-4555-8418-e53025a6bdac-kube-api-access-r5558" (OuterVolumeSpecName: "kube-api-access-r5558") pod "9e29f7cf-8720-4555-8418-e53025a6bdac" (UID: "9e29f7cf-8720-4555-8418-e53025a6bdac"). InnerVolumeSpecName "kube-api-access-r5558". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.218890 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-scripts" (OuterVolumeSpecName: "scripts") pod "9e29f7cf-8720-4555-8418-e53025a6bdac" (UID: "9e29f7cf-8720-4555-8418-e53025a6bdac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.249650 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e29f7cf-8720-4555-8418-e53025a6bdac" (UID: "9e29f7cf-8720-4555-8418-e53025a6bdac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.268693 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-config-data" (OuterVolumeSpecName: "config-data") pod "9e29f7cf-8720-4555-8418-e53025a6bdac" (UID: "9e29f7cf-8720-4555-8418-e53025a6bdac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.296689 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5558\" (UniqueName: \"kubernetes.io/projected/9e29f7cf-8720-4555-8418-e53025a6bdac-kube-api-access-r5558\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.297153 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.297202 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.297215 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e29f7cf-8720-4555-8418-e53025a6bdac-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.666358 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab397110-992c-4443-bad3-a62a2cb9d02c","Type":"ContainerStarted","Data":"dd16ba0b2d7b1ea3ad41bbd8ebf247917a09111a10cceb06ae55538b2f66bbfe"} Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.666400 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab397110-992c-4443-bad3-a62a2cb9d02c","Type":"ContainerStarted","Data":"97fd25d9d1e461461889facb51bbfbc442106a4f6abfb5d4f1772926cfe809f2"} Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.668901 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wvpkk" event={"ID":"9e29f7cf-8720-4555-8418-e53025a6bdac","Type":"ContainerDied","Data":"4a6b671a3a800c9827668cd2d40e1b33a5b1ac1cd6eb1e6242f1327ed6e4d44e"} Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.668954 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a6b671a3a800c9827668cd2d40e1b33a5b1ac1cd6eb1e6242f1327ed6e4d44e" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.669022 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wvpkk" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.775999 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 08:13:26 crc kubenswrapper[4806]: E0126 08:13:26.776325 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e29f7cf-8720-4555-8418-e53025a6bdac" containerName="nova-cell0-conductor-db-sync" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.776344 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e29f7cf-8720-4555-8418-e53025a6bdac" containerName="nova-cell0-conductor-db-sync" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.776515 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e29f7cf-8720-4555-8418-e53025a6bdac" containerName="nova-cell0-conductor-db-sync" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.777062 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.782060 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-q4qd9" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.782347 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.806701 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee89af5-d60e-4b49-938e-443c6299f3fa-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aee89af5-d60e-4b49-938e-443c6299f3fa\") " pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.806893 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rscq7\" (UniqueName: \"kubernetes.io/projected/aee89af5-d60e-4b49-938e-443c6299f3fa-kube-api-access-rscq7\") pod \"nova-cell0-conductor-0\" (UID: \"aee89af5-d60e-4b49-938e-443c6299f3fa\") " pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.806919 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee89af5-d60e-4b49-938e-443c6299f3fa-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aee89af5-d60e-4b49-938e-443c6299f3fa\") " pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.808332 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.908950 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rscq7\" (UniqueName: \"kubernetes.io/projected/aee89af5-d60e-4b49-938e-443c6299f3fa-kube-api-access-rscq7\") pod \"nova-cell0-conductor-0\" (UID: \"aee89af5-d60e-4b49-938e-443c6299f3fa\") " pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.908987 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee89af5-d60e-4b49-938e-443c6299f3fa-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aee89af5-d60e-4b49-938e-443c6299f3fa\") " pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.909070 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee89af5-d60e-4b49-938e-443c6299f3fa-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aee89af5-d60e-4b49-938e-443c6299f3fa\") " pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.913493 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee89af5-d60e-4b49-938e-443c6299f3fa-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aee89af5-d60e-4b49-938e-443c6299f3fa\") " pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.914382 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee89af5-d60e-4b49-938e-443c6299f3fa-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"aee89af5-d60e-4b49-938e-443c6299f3fa\") " pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:26 crc kubenswrapper[4806]: I0126 08:13:26.924732 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rscq7\" (UniqueName: \"kubernetes.io/projected/aee89af5-d60e-4b49-938e-443c6299f3fa-kube-api-access-rscq7\") pod \"nova-cell0-conductor-0\" (UID: \"aee89af5-d60e-4b49-938e-443c6299f3fa\") " pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:27 crc kubenswrapper[4806]: I0126 08:13:27.098032 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:27 crc kubenswrapper[4806]: I0126 08:13:27.542670 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 08:13:27 crc kubenswrapper[4806]: I0126 08:13:27.680693 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aee89af5-d60e-4b49-938e-443c6299f3fa","Type":"ContainerStarted","Data":"17a5a37ac4fee59432855dda3cb66fadd739b46db7b290599ca55e3a9b16f9a9"} Jan 26 08:13:28 crc kubenswrapper[4806]: I0126 08:13:28.694725 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aee89af5-d60e-4b49-938e-443c6299f3fa","Type":"ContainerStarted","Data":"b4df371a8575bb947fe548bb348ce25e2bb6ecd9088985d70fea9b29980f316c"} Jan 26 08:13:28 crc kubenswrapper[4806]: I0126 08:13:28.695705 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:28 crc kubenswrapper[4806]: I0126 08:13:28.707592 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab397110-992c-4443-bad3-a62a2cb9d02c","Type":"ContainerStarted","Data":"abb4b2d62ff50b2d12fd76939d683603c2f9db0de3a288d206bc92e1fd7763d4"} Jan 26 08:13:28 crc kubenswrapper[4806]: I0126 08:13:28.708598 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 08:13:28 crc kubenswrapper[4806]: I0126 08:13:28.721438 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.721419266 podStartE2EDuration="2.721419266s" podCreationTimestamp="2026-01-26 08:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:13:28.713820403 +0000 UTC m=+1187.978228459" watchObservedRunningTime="2026-01-26 08:13:28.721419266 +0000 UTC m=+1187.985827312" Jan 26 08:13:28 crc kubenswrapper[4806]: I0126 08:13:28.742485 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.514158981 podStartE2EDuration="5.742470044s" podCreationTimestamp="2026-01-26 08:13:23 +0000 UTC" firstStartedPulling="2026-01-26 08:13:24.554286669 +0000 UTC m=+1183.818694725" lastFinishedPulling="2026-01-26 08:13:27.782597732 +0000 UTC m=+1187.047005788" observedRunningTime="2026-01-26 08:13:28.7419587 +0000 UTC m=+1188.006366766" watchObservedRunningTime="2026-01-26 08:13:28.742470044 +0000 UTC m=+1188.006878100" Jan 26 08:13:32 crc kubenswrapper[4806]: I0126 08:13:32.131677 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 08:13:32 crc kubenswrapper[4806]: I0126 08:13:32.815176 4806 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-cell-mapping-pwr48"] Jan 26 08:13:32 crc kubenswrapper[4806]: I0126 08:13:32.819447 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:32 crc kubenswrapper[4806]: I0126 08:13:32.821492 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 08:13:32 crc kubenswrapper[4806]: I0126 08:13:32.821889 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 08:13:32 crc kubenswrapper[4806]: I0126 08:13:32.828951 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pwr48"] Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.018896 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.020874 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-config-data\") pod \"nova-cell0-cell-mapping-pwr48\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.020921 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brfn\" (UniqueName: \"kubernetes.io/projected/0d495701-d98d-4c0a-be75-2330f3589594-kube-api-access-4brfn\") pod \"nova-cell0-cell-mapping-pwr48\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.020977 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-scripts\") pod \"nova-cell0-cell-mapping-pwr48\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.021051 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.021271 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pwr48\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.023235 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.056859 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.087298 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.088915 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.090759 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.123979 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec315438-1a4f-4779-ac65-7c8adcbf0c69-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.124025 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-scripts\") pod \"nova-cell0-cell-mapping-pwr48\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.124045 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec315438-1a4f-4779-ac65-7c8adcbf0c69-config-data\") pod \"nova-api-0\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.124059 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec315438-1a4f-4779-ac65-7c8adcbf0c69-logs\") pod \"nova-api-0\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.124083 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac27597-a156-4995-a67e-98858e667c8a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4ac27597-a156-4995-a67e-98858e667c8a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.124118 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac27597-a156-4995-a67e-98858e667c8a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4ac27597-a156-4995-a67e-98858e667c8a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.124155 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wkqh\" (UniqueName: \"kubernetes.io/projected/ec315438-1a4f-4779-ac65-7c8adcbf0c69-kube-api-access-2wkqh\") pod \"nova-api-0\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.124256 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrg9r\" (UniqueName: \"kubernetes.io/projected/4ac27597-a156-4995-a67e-98858e667c8a-kube-api-access-hrg9r\") pod \"nova-cell1-novncproxy-0\" (UID: \"4ac27597-a156-4995-a67e-98858e667c8a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.124286 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pwr48\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.124321 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-config-data\") pod \"nova-cell0-cell-mapping-pwr48\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.124360 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4brfn\" (UniqueName: \"kubernetes.io/projected/0d495701-d98d-4c0a-be75-2330f3589594-kube-api-access-4brfn\") pod \"nova-cell0-cell-mapping-pwr48\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.135287 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-scripts\") pod \"nova-cell0-cell-mapping-pwr48\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.151368 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-pwr48\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.154895 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-config-data\") pod \"nova-cell0-cell-mapping-pwr48\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.154970 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.179884 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4brfn\" (UniqueName: \"kubernetes.io/projected/0d495701-d98d-4c0a-be75-2330f3589594-kube-api-access-4brfn\") pod \"nova-cell0-cell-mapping-pwr48\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.227057 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrg9r\" (UniqueName: \"kubernetes.io/projected/4ac27597-a156-4995-a67e-98858e667c8a-kube-api-access-hrg9r\") pod \"nova-cell1-novncproxy-0\" (UID: \"4ac27597-a156-4995-a67e-98858e667c8a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.227194 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec315438-1a4f-4779-ac65-7c8adcbf0c69-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.227214 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec315438-1a4f-4779-ac65-7c8adcbf0c69-config-data\") pod \"nova-api-0\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.227232 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec315438-1a4f-4779-ac65-7c8adcbf0c69-logs\") pod \"nova-api-0\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.227270 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac27597-a156-4995-a67e-98858e667c8a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4ac27597-a156-4995-a67e-98858e667c8a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.227313 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac27597-a156-4995-a67e-98858e667c8a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4ac27597-a156-4995-a67e-98858e667c8a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.227338 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wkqh\" (UniqueName: \"kubernetes.io/projected/ec315438-1a4f-4779-ac65-7c8adcbf0c69-kube-api-access-2wkqh\") pod \"nova-api-0\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.228178 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec315438-1a4f-4779-ac65-7c8adcbf0c69-logs\") pod \"nova-api-0\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.235034 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec315438-1a4f-4779-ac65-7c8adcbf0c69-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.235477 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec315438-1a4f-4779-ac65-7c8adcbf0c69-config-data\") pod \"nova-api-0\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.235862 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac27597-a156-4995-a67e-98858e667c8a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4ac27597-a156-4995-a67e-98858e667c8a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.262068 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac27597-a156-4995-a67e-98858e667c8a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4ac27597-a156-4995-a67e-98858e667c8a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.277634 4806 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.279225 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.288961 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.312750 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.316486 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wkqh\" (UniqueName: \"kubernetes.io/projected/ec315438-1a4f-4779-ac65-7c8adcbf0c69-kube-api-access-2wkqh\") pod \"nova-api-0\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.327913 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7649d82-1472-4df9-afad-04cf71d5138b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.327965 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65x74\" (UniqueName: \"kubernetes.io/projected/d7649d82-1472-4df9-afad-04cf71d5138b-kube-api-access-65x74\") pod \"nova-metadata-0\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.327999 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7649d82-1472-4df9-afad-04cf71d5138b-config-data\") pod \"nova-metadata-0\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.328059 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7649d82-1472-4df9-afad-04cf71d5138b-logs\") pod \"nova-metadata-0\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.328957 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrg9r\" (UniqueName: \"kubernetes.io/projected/4ac27597-a156-4995-a67e-98858e667c8a-kube-api-access-hrg9r\") pod \"nova-cell1-novncproxy-0\" (UID: \"4ac27597-a156-4995-a67e-98858e667c8a\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.352066 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.404982 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.417760 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.419097 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.423712 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.431391 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7649d82-1472-4df9-afad-04cf71d5138b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.431451 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65x74\" (UniqueName: \"kubernetes.io/projected/d7649d82-1472-4df9-afad-04cf71d5138b-kube-api-access-65x74\") pod \"nova-metadata-0\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.431495 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7649d82-1472-4df9-afad-04cf71d5138b-config-data\") pod \"nova-metadata-0\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.431571 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7649d82-1472-4df9-afad-04cf71d5138b-logs\") pod \"nova-metadata-0\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.437484 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.437703 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7649d82-1472-4df9-afad-04cf71d5138b-config-data\") pod \"nova-metadata-0\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.440306 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7649d82-1472-4df9-afad-04cf71d5138b-logs\") pod \"nova-metadata-0\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.448756 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.450938 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7649d82-1472-4df9-afad-04cf71d5138b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.469184 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65x74\" (UniqueName: \"kubernetes.io/projected/d7649d82-1472-4df9-afad-04cf71d5138b-kube-api-access-65x74\") pod \"nova-metadata-0\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.524810 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qwtvl"] Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.526529 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.541640 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qwtvl"] Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.544825 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.545255 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44tkq\" (UniqueName: \"kubernetes.io/projected/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-kube-api-access-44tkq\") pod \"nova-scheduler-0\" (UID: \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.545299 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-config-data\") pod \"nova-scheduler-0\" (UID: \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.647739 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44tkq\" (UniqueName: \"kubernetes.io/projected/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-kube-api-access-44tkq\") pod \"nova-scheduler-0\" (UID: \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.647789 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-config-data\") pod \"nova-scheduler-0\" (UID: \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.647812 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.647838 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.647869 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-config\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.647898 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.647946 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.648013 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.648036 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgs66\" (UniqueName: \"kubernetes.io/projected/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-kube-api-access-pgs66\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.654082 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.654702 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-config-data\") pod \"nova-scheduler-0\" (UID: \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.677653 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44tkq\" (UniqueName: \"kubernetes.io/projected/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-kube-api-access-44tkq\") pod \"nova-scheduler-0\" (UID: \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.721956 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.753753 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgs66\" (UniqueName: \"kubernetes.io/projected/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-kube-api-access-pgs66\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.753808 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.753834 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.753875 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-config\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.753906 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.753953 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.755117 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.755711 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-config\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.755764 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 
08:13:33.756237 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.756606 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.757742 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.782151 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgs66\" (UniqueName: \"kubernetes.io/projected/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-kube-api-access-pgs66\") pod \"dnsmasq-dns-568d7fd7cf-qwtvl\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:33 crc kubenswrapper[4806]: I0126 08:13:33.865104 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:34 crc kubenswrapper[4806]: I0126 08:13:34.206999 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:13:34 crc kubenswrapper[4806]: I0126 08:13:34.378259 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 08:13:34 crc kubenswrapper[4806]: W0126 08:13:34.458847 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d495701_d98d_4c0a_be75_2330f3589594.slice/crio-31e93cffeec5602365abbd2e0b2ad2b0811f931457702e0402364446b67f47bf WatchSource:0}: Error finding container 31e93cffeec5602365abbd2e0b2ad2b0811f931457702e0402364446b67f47bf: Status 404 returned error can't find the container with id 31e93cffeec5602365abbd2e0b2ad2b0811f931457702e0402364446b67f47bf Jan 26 08:13:34 crc kubenswrapper[4806]: I0126 08:13:34.467893 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-pwr48"] Jan 26 08:13:34 crc kubenswrapper[4806]: I0126 08:13:34.729653 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:13:34 crc kubenswrapper[4806]: I0126 08:13:34.791069 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:34 crc kubenswrapper[4806]: I0126 08:13:34.792126 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4ac27597-a156-4995-a67e-98858e667c8a","Type":"ContainerStarted","Data":"0ceaa7a10c746943f0423e155af4fd22a17a5920a04d8a69626e862748a2d675"} Jan 26 08:13:34 crc kubenswrapper[4806]: I0126 08:13:34.793664 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec315438-1a4f-4779-ac65-7c8adcbf0c69","Type":"ContainerStarted","Data":"3486154aa5c012fb03f8a1e2d74f918de240a494bc1d82a4e7df4d9534517699"} Jan 26 08:13:34 crc kubenswrapper[4806]: W0126 08:13:34.799791 4806 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7649d82_1472_4df9_afad_04cf71d5138b.slice/crio-1c1b08b0b8994c08dbea2c6f2dbe699701caa6f0fba9785a17ee9a7c0c7bdf76 WatchSource:0}: Error finding container 1c1b08b0b8994c08dbea2c6f2dbe699701caa6f0fba9785a17ee9a7c0c7bdf76: Status 404 returned error can't find the container with id 1c1b08b0b8994c08dbea2c6f2dbe699701caa6f0fba9785a17ee9a7c0c7bdf76 Jan 26 08:13:34 crc kubenswrapper[4806]: I0126 08:13:34.799919 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2","Type":"ContainerStarted","Data":"9b77a2c4b3f9558cdf37ef3b42d2bf813279f183849be66dca9465df20424fcc"} Jan 26 08:13:34 crc kubenswrapper[4806]: I0126 08:13:34.804100 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pwr48" event={"ID":"0d495701-d98d-4c0a-be75-2330f3589594","Type":"ContainerStarted","Data":"31e93cffeec5602365abbd2e0b2ad2b0811f931457702e0402364446b67f47bf"} Jan 26 08:13:34 crc kubenswrapper[4806]: I0126 08:13:34.910394 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qwtvl"] Jan 26 08:13:34 crc kubenswrapper[4806]: W0126 08:13:34.918342 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod001ecd97_04e4_4d0e_a713_34e7fc0a80a7.slice/crio-d6099c6f109b684dea38008828d7e9e04e616af8cc8a6f4030bb6e4b5e0006b4 WatchSource:0}: Error finding container d6099c6f109b684dea38008828d7e9e04e616af8cc8a6f4030bb6e4b5e0006b4: Status 404 returned error can't find the container with id d6099c6f109b684dea38008828d7e9e04e616af8cc8a6f4030bb6e4b5e0006b4 Jan 26 08:13:35 crc kubenswrapper[4806]: I0126 08:13:35.825865 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pwr48" event={"ID":"0d495701-d98d-4c0a-be75-2330f3589594","Type":"ContainerStarted","Data":"89e896068718c5e18ba6426ea7fd689d74dd449ffa6ca67a6ea77d410009d80e"} Jan 26 08:13:35 crc kubenswrapper[4806]: I0126 08:13:35.833467 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7649d82-1472-4df9-afad-04cf71d5138b","Type":"ContainerStarted","Data":"1c1b08b0b8994c08dbea2c6f2dbe699701caa6f0fba9785a17ee9a7c0c7bdf76"} Jan 26 08:13:35 crc kubenswrapper[4806]: I0126 08:13:35.841234 4806 generic.go:334] "Generic (PLEG): container finished" podID="001ecd97-04e4-4d0e-a713-34e7fc0a80a7" containerID="466eb1c9b8fe43db8d40e8ff281f9233eb106cae8827c8d209937e5a72210c24" exitCode=0 Jan 26 08:13:35 crc kubenswrapper[4806]: I0126 08:13:35.841490 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" event={"ID":"001ecd97-04e4-4d0e-a713-34e7fc0a80a7","Type":"ContainerDied","Data":"466eb1c9b8fe43db8d40e8ff281f9233eb106cae8827c8d209937e5a72210c24"} Jan 26 08:13:35 crc kubenswrapper[4806]: I0126 08:13:35.841647 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" event={"ID":"001ecd97-04e4-4d0e-a713-34e7fc0a80a7","Type":"ContainerStarted","Data":"d6099c6f109b684dea38008828d7e9e04e616af8cc8a6f4030bb6e4b5e0006b4"} Jan 26 08:13:35 crc kubenswrapper[4806]: I0126 08:13:35.856697 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-pwr48" podStartSLOduration=3.856673729 podStartE2EDuration="3.856673729s" podCreationTimestamp="2026-01-26 08:13:32 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:13:35.844864339 +0000 UTC m=+1195.109272395" watchObservedRunningTime="2026-01-26 08:13:35.856673729 +0000 UTC m=+1195.121081785" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.372955 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ltfdd"] Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.374714 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.383803 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.384028 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.394327 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ltfdd"] Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.428579 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xwk6\" (UniqueName: \"kubernetes.io/projected/17a62481-034a-4042-b58d-a3ebf9e99202-kube-api-access-5xwk6\") pod \"nova-cell1-conductor-db-sync-ltfdd\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.428867 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-scripts\") pod \"nova-cell1-conductor-db-sync-ltfdd\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.429016 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-config-data\") pod \"nova-cell1-conductor-db-sync-ltfdd\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.429092 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ltfdd\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.532404 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-config-data\") pod \"nova-cell1-conductor-db-sync-ltfdd\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.532742 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ltfdd\" (UID: 
\"17a62481-034a-4042-b58d-a3ebf9e99202\") " pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.532840 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xwk6\" (UniqueName: \"kubernetes.io/projected/17a62481-034a-4042-b58d-a3ebf9e99202-kube-api-access-5xwk6\") pod \"nova-cell1-conductor-db-sync-ltfdd\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.533222 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-scripts\") pod \"nova-cell1-conductor-db-sync-ltfdd\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.544252 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ltfdd\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.544826 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-scripts\") pod \"nova-cell1-conductor-db-sync-ltfdd\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.560809 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xwk6\" (UniqueName: \"kubernetes.io/projected/17a62481-034a-4042-b58d-a3ebf9e99202-kube-api-access-5xwk6\") pod \"nova-cell1-conductor-db-sync-ltfdd\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.568114 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-config-data\") pod \"nova-cell1-conductor-db-sync-ltfdd\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.729707 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.873992 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" event={"ID":"001ecd97-04e4-4d0e-a713-34e7fc0a80a7","Type":"ContainerStarted","Data":"862cc2a44cd0f69a7e2b0d6a19694935b37b8b55e7afb510002ee9ec72efc192"} Jan 26 08:13:36 crc kubenswrapper[4806]: I0126 08:13:36.874059 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:37 crc kubenswrapper[4806]: I0126 08:13:37.487079 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" podStartSLOduration=4.487055925 podStartE2EDuration="4.487055925s" podCreationTimestamp="2026-01-26 08:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:13:36.90260915 +0000 UTC m=+1196.167017206" watchObservedRunningTime="2026-01-26 08:13:37.487055925 +0000 UTC m=+1196.751463981" Jan 26 08:13:37 crc kubenswrapper[4806]: I0126 08:13:37.494392 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ltfdd"] Jan 26 08:13:38 crc kubenswrapper[4806]: I0126 08:13:38.241105 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:38 crc kubenswrapper[4806]: I0126 08:13:38.268052 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 08:13:39 crc kubenswrapper[4806]: W0126 08:13:39.195694 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17a62481_034a_4042_b58d_a3ebf9e99202.slice/crio-f8e52b7879ac1f95fad5988c9ad98ad2edd859d4274bafbf0f250ac77180dddc WatchSource:0}: Error finding container f8e52b7879ac1f95fad5988c9ad98ad2edd859d4274bafbf0f250ac77180dddc: Status 404 returned error can't find the container with id f8e52b7879ac1f95fad5988c9ad98ad2edd859d4274bafbf0f250ac77180dddc Jan 26 08:13:39 crc kubenswrapper[4806]: I0126 08:13:39.904718 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ltfdd" event={"ID":"17a62481-034a-4042-b58d-a3ebf9e99202","Type":"ContainerStarted","Data":"f8e52b7879ac1f95fad5988c9ad98ad2edd859d4274bafbf0f250ac77180dddc"} Jan 26 08:13:40 crc kubenswrapper[4806]: I0126 08:13:40.936039 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ltfdd" event={"ID":"17a62481-034a-4042-b58d-a3ebf9e99202","Type":"ContainerStarted","Data":"8e58449f0541bdbd765bf2724cf689af99f9580047a40d7e34b5976769a0b19a"} Jan 26 08:13:40 crc kubenswrapper[4806]: I0126 08:13:40.955908 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4ac27597-a156-4995-a67e-98858e667c8a","Type":"ContainerStarted","Data":"0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78"} Jan 26 08:13:40 crc kubenswrapper[4806]: I0126 08:13:40.956324 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4ac27597-a156-4995-a67e-98858e667c8a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78" gracePeriod=30 Jan 26 08:13:40 crc kubenswrapper[4806]: I0126 08:13:40.976680 4806 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2","Type":"ContainerStarted","Data":"8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559"} Jan 26 08:13:40 crc kubenswrapper[4806]: I0126 08:13:40.986495 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec315438-1a4f-4779-ac65-7c8adcbf0c69","Type":"ContainerStarted","Data":"1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d"} Jan 26 08:13:40 crc kubenswrapper[4806]: I0126 08:13:40.986575 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec315438-1a4f-4779-ac65-7c8adcbf0c69","Type":"ContainerStarted","Data":"1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a"} Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.001499 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-ltfdd" podStartSLOduration=5.001477608 podStartE2EDuration="5.001477608s" podCreationTimestamp="2026-01-26 08:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:13:40.967377695 +0000 UTC m=+1200.231785751" watchObservedRunningTime="2026-01-26 08:13:41.001477608 +0000 UTC m=+1200.265885664" Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.004937 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7649d82-1472-4df9-afad-04cf71d5138b","Type":"ContainerStarted","Data":"a7f1a5a5ef35b3b2a50d9879b06911860606d70b6a99d186239de2d0e6c1503c"} Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.004989 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7649d82-1472-4df9-afad-04cf71d5138b","Type":"ContainerStarted","Data":"e4a3dabd6e807e0d9aa6e5dfe597dc616e6a1ced935240104640811eeb270686"} Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.005106 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d7649d82-1472-4df9-afad-04cf71d5138b" containerName="nova-metadata-log" containerID="cri-o://e4a3dabd6e807e0d9aa6e5dfe597dc616e6a1ced935240104640811eeb270686" gracePeriod=30 Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.005432 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d7649d82-1472-4df9-afad-04cf71d5138b" containerName="nova-metadata-metadata" containerID="cri-o://a7f1a5a5ef35b3b2a50d9879b06911860606d70b6a99d186239de2d0e6c1503c" gracePeriod=30 Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.040044 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.714654659 podStartE2EDuration="8.040026667s" podCreationTimestamp="2026-01-26 08:13:33 +0000 UTC" firstStartedPulling="2026-01-26 08:13:34.48466156 +0000 UTC m=+1193.749069616" lastFinishedPulling="2026-01-26 08:13:39.810033568 +0000 UTC m=+1199.074441624" observedRunningTime="2026-01-26 08:13:40.989509994 +0000 UTC m=+1200.253918050" watchObservedRunningTime="2026-01-26 08:13:41.040026667 +0000 UTC m=+1200.304434723" Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.041030 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.553438459 
podStartE2EDuration="9.041022164s" podCreationTimestamp="2026-01-26 08:13:32 +0000 UTC" firstStartedPulling="2026-01-26 08:13:34.321679922 +0000 UTC m=+1193.586087978" lastFinishedPulling="2026-01-26 08:13:39.809263627 +0000 UTC m=+1199.073671683" observedRunningTime="2026-01-26 08:13:41.023062722 +0000 UTC m=+1200.287470778" watchObservedRunningTime="2026-01-26 08:13:41.041022164 +0000 UTC m=+1200.305430220" Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.097762 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.061143649 podStartE2EDuration="8.097740591s" podCreationTimestamp="2026-01-26 08:13:33 +0000 UTC" firstStartedPulling="2026-01-26 08:13:34.75427284 +0000 UTC m=+1194.018680896" lastFinishedPulling="2026-01-26 08:13:39.790869782 +0000 UTC m=+1199.055277838" observedRunningTime="2026-01-26 08:13:41.045016736 +0000 UTC m=+1200.309424792" watchObservedRunningTime="2026-01-26 08:13:41.097740591 +0000 UTC m=+1200.362148647" Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.118016 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.122985458 podStartE2EDuration="8.117995587s" podCreationTimestamp="2026-01-26 08:13:33 +0000 UTC" firstStartedPulling="2026-01-26 08:13:34.812933311 +0000 UTC m=+1194.077341367" lastFinishedPulling="2026-01-26 08:13:39.80794344 +0000 UTC m=+1199.072351496" observedRunningTime="2026-01-26 08:13:41.114435467 +0000 UTC m=+1200.378843523" watchObservedRunningTime="2026-01-26 08:13:41.117995587 +0000 UTC m=+1200.382403643" Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.351024 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.351501 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="ceilometer-central-agent" containerID="cri-o://a570a0e4c919d2e04d4f6586f83d7d7bc47ffa62c55c5d9e1f65b698749d998a" gracePeriod=30 Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.351710 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="ceilometer-notification-agent" containerID="cri-o://97fd25d9d1e461461889facb51bbfbc442106a4f6abfb5d4f1772926cfe809f2" gracePeriod=30 Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.351731 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="proxy-httpd" containerID="cri-o://abb4b2d62ff50b2d12fd76939d683603c2f9db0de3a288d206bc92e1fd7763d4" gracePeriod=30 Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.351712 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="sg-core" containerID="cri-o://dd16ba0b2d7b1ea3ad41bbd8ebf247917a09111a10cceb06ae55538b2f66bbfe" gracePeriod=30 Jan 26 08:13:41 crc kubenswrapper[4806]: I0126 08:13:41.433718 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.199:3000/\": EOF" Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.030137 4806 
generic.go:334] "Generic (PLEG): container finished" podID="d7649d82-1472-4df9-afad-04cf71d5138b" containerID="a7f1a5a5ef35b3b2a50d9879b06911860606d70b6a99d186239de2d0e6c1503c" exitCode=0 Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.030389 4806 generic.go:334] "Generic (PLEG): container finished" podID="d7649d82-1472-4df9-afad-04cf71d5138b" containerID="e4a3dabd6e807e0d9aa6e5dfe597dc616e6a1ced935240104640811eeb270686" exitCode=143 Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.030194 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7649d82-1472-4df9-afad-04cf71d5138b","Type":"ContainerDied","Data":"a7f1a5a5ef35b3b2a50d9879b06911860606d70b6a99d186239de2d0e6c1503c"} Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.030492 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7649d82-1472-4df9-afad-04cf71d5138b","Type":"ContainerDied","Data":"e4a3dabd6e807e0d9aa6e5dfe597dc616e6a1ced935240104640811eeb270686"} Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.030507 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d7649d82-1472-4df9-afad-04cf71d5138b","Type":"ContainerDied","Data":"1c1b08b0b8994c08dbea2c6f2dbe699701caa6f0fba9785a17ee9a7c0c7bdf76"} Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.030532 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c1b08b0b8994c08dbea2c6f2dbe699701caa6f0fba9785a17ee9a7c0c7bdf76" Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.035697 4806 generic.go:334] "Generic (PLEG): container finished" podID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerID="abb4b2d62ff50b2d12fd76939d683603c2f9db0de3a288d206bc92e1fd7763d4" exitCode=0 Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.035733 4806 generic.go:334] "Generic (PLEG): container finished" podID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerID="dd16ba0b2d7b1ea3ad41bbd8ebf247917a09111a10cceb06ae55538b2f66bbfe" exitCode=2 Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.035745 4806 generic.go:334] "Generic (PLEG): container finished" podID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerID="a570a0e4c919d2e04d4f6586f83d7d7bc47ffa62c55c5d9e1f65b698749d998a" exitCode=0 Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.036602 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab397110-992c-4443-bad3-a62a2cb9d02c","Type":"ContainerDied","Data":"abb4b2d62ff50b2d12fd76939d683603c2f9db0de3a288d206bc92e1fd7763d4"} Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.036638 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab397110-992c-4443-bad3-a62a2cb9d02c","Type":"ContainerDied","Data":"dd16ba0b2d7b1ea3ad41bbd8ebf247917a09111a10cceb06ae55538b2f66bbfe"} Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.036656 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab397110-992c-4443-bad3-a62a2cb9d02c","Type":"ContainerDied","Data":"a570a0e4c919d2e04d4f6586f83d7d7bc47ffa62c55c5d9e1f65b698749d998a"} Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.046691 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.162011 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65x74\" (UniqueName: \"kubernetes.io/projected/d7649d82-1472-4df9-afad-04cf71d5138b-kube-api-access-65x74\") pod \"d7649d82-1472-4df9-afad-04cf71d5138b\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.162088 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7649d82-1472-4df9-afad-04cf71d5138b-config-data\") pod \"d7649d82-1472-4df9-afad-04cf71d5138b\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.162190 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7649d82-1472-4df9-afad-04cf71d5138b-combined-ca-bundle\") pod \"d7649d82-1472-4df9-afad-04cf71d5138b\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.162245 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7649d82-1472-4df9-afad-04cf71d5138b-logs\") pod \"d7649d82-1472-4df9-afad-04cf71d5138b\" (UID: \"d7649d82-1472-4df9-afad-04cf71d5138b\") " Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.165129 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7649d82-1472-4df9-afad-04cf71d5138b-logs" (OuterVolumeSpecName: "logs") pod "d7649d82-1472-4df9-afad-04cf71d5138b" (UID: "d7649d82-1472-4df9-afad-04cf71d5138b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.179731 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7649d82-1472-4df9-afad-04cf71d5138b-kube-api-access-65x74" (OuterVolumeSpecName: "kube-api-access-65x74") pod "d7649d82-1472-4df9-afad-04cf71d5138b" (UID: "d7649d82-1472-4df9-afad-04cf71d5138b"). InnerVolumeSpecName "kube-api-access-65x74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.204387 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7649d82-1472-4df9-afad-04cf71d5138b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7649d82-1472-4df9-afad-04cf71d5138b" (UID: "d7649d82-1472-4df9-afad-04cf71d5138b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.206930 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7649d82-1472-4df9-afad-04cf71d5138b-config-data" (OuterVolumeSpecName: "config-data") pod "d7649d82-1472-4df9-afad-04cf71d5138b" (UID: "d7649d82-1472-4df9-afad-04cf71d5138b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.264637 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65x74\" (UniqueName: \"kubernetes.io/projected/d7649d82-1472-4df9-afad-04cf71d5138b-kube-api-access-65x74\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.264673 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7649d82-1472-4df9-afad-04cf71d5138b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.264688 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7649d82-1472-4df9-afad-04cf71d5138b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:42 crc kubenswrapper[4806]: I0126 08:13:42.264699 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7649d82-1472-4df9-afad-04cf71d5138b-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.103125 4806 generic.go:334] "Generic (PLEG): container finished" podID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerID="97fd25d9d1e461461889facb51bbfbc442106a4f6abfb5d4f1772926cfe809f2" exitCode=0 Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.109292 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab397110-992c-4443-bad3-a62a2cb9d02c","Type":"ContainerDied","Data":"97fd25d9d1e461461889facb51bbfbc442106a4f6abfb5d4f1772926cfe809f2"} Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.109514 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.154906 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.168193 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.182691 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:43 crc kubenswrapper[4806]: E0126 08:13:43.183119 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7649d82-1472-4df9-afad-04cf71d5138b" containerName="nova-metadata-metadata" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.183135 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7649d82-1472-4df9-afad-04cf71d5138b" containerName="nova-metadata-metadata" Jan 26 08:13:43 crc kubenswrapper[4806]: E0126 08:13:43.183167 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7649d82-1472-4df9-afad-04cf71d5138b" containerName="nova-metadata-log" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.183176 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7649d82-1472-4df9-afad-04cf71d5138b" containerName="nova-metadata-log" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.183382 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7649d82-1472-4df9-afad-04cf71d5138b" containerName="nova-metadata-metadata" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.183395 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7649d82-1472-4df9-afad-04cf71d5138b" containerName="nova-metadata-log" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.184350 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.195909 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.196107 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.215923 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.259817 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.313318 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.313607 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.313644 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee694d4d-cfe7-43d8-be01-a4f08de501a8-logs\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.313705 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl6r6\" (UniqueName: \"kubernetes.io/projected/ee694d4d-cfe7-43d8-be01-a4f08de501a8-kube-api-access-wl6r6\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.313762 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-config-data\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.352540 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.352599 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.406097 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.414880 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab397110-992c-4443-bad3-a62a2cb9d02c-run-httpd\") pod \"ab397110-992c-4443-bad3-a62a2cb9d02c\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.415194 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqbcz\" (UniqueName: \"kubernetes.io/projected/ab397110-992c-4443-bad3-a62a2cb9d02c-kube-api-access-mqbcz\") pod \"ab397110-992c-4443-bad3-a62a2cb9d02c\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.415236 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-config-data\") pod \"ab397110-992c-4443-bad3-a62a2cb9d02c\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " Jan 26 08:13:43 crc 
kubenswrapper[4806]: I0126 08:13:43.415273 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab397110-992c-4443-bad3-a62a2cb9d02c-log-httpd\") pod \"ab397110-992c-4443-bad3-a62a2cb9d02c\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.415343 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-scripts\") pod \"ab397110-992c-4443-bad3-a62a2cb9d02c\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.415371 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-sg-core-conf-yaml\") pod \"ab397110-992c-4443-bad3-a62a2cb9d02c\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.415403 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-combined-ca-bundle\") pod \"ab397110-992c-4443-bad3-a62a2cb9d02c\" (UID: \"ab397110-992c-4443-bad3-a62a2cb9d02c\") " Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.415454 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab397110-992c-4443-bad3-a62a2cb9d02c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ab397110-992c-4443-bad3-a62a2cb9d02c" (UID: "ab397110-992c-4443-bad3-a62a2cb9d02c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.415724 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee694d4d-cfe7-43d8-be01-a4f08de501a8-logs\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.415872 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl6r6\" (UniqueName: \"kubernetes.io/projected/ee694d4d-cfe7-43d8-be01-a4f08de501a8-kube-api-access-wl6r6\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.415888 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab397110-992c-4443-bad3-a62a2cb9d02c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ab397110-992c-4443-bad3-a62a2cb9d02c" (UID: "ab397110-992c-4443-bad3-a62a2cb9d02c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.415996 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-config-data\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.416110 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.416144 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.416197 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab397110-992c-4443-bad3-a62a2cb9d02c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.416210 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ab397110-992c-4443-bad3-a62a2cb9d02c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.416495 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee694d4d-cfe7-43d8-be01-a4f08de501a8-logs\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.428123 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.429144 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-scripts" (OuterVolumeSpecName: "scripts") pod "ab397110-992c-4443-bad3-a62a2cb9d02c" (UID: "ab397110-992c-4443-bad3-a62a2cb9d02c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.429329 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab397110-992c-4443-bad3-a62a2cb9d02c-kube-api-access-mqbcz" (OuterVolumeSpecName: "kube-api-access-mqbcz") pod "ab397110-992c-4443-bad3-a62a2cb9d02c" (UID: "ab397110-992c-4443-bad3-a62a2cb9d02c"). InnerVolumeSpecName "kube-api-access-mqbcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.430221 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-config-data\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.438020 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.447954 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl6r6\" (UniqueName: \"kubernetes.io/projected/ee694d4d-cfe7-43d8-be01-a4f08de501a8-kube-api-access-wl6r6\") pod \"nova-metadata-0\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.491246 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ab397110-992c-4443-bad3-a62a2cb9d02c" (UID: "ab397110-992c-4443-bad3-a62a2cb9d02c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.519114 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqbcz\" (UniqueName: \"kubernetes.io/projected/ab397110-992c-4443-bad3-a62a2cb9d02c-kube-api-access-mqbcz\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.519144 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.519156 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.562776 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab397110-992c-4443-bad3-a62a2cb9d02c" (UID: "ab397110-992c-4443-bad3-a62a2cb9d02c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.563791 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-config-data" (OuterVolumeSpecName: "config-data") pod "ab397110-992c-4443-bad3-a62a2cb9d02c" (UID: "ab397110-992c-4443-bad3-a62a2cb9d02c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.579046 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.622490 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.622748 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab397110-992c-4443-bad3-a62a2cb9d02c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.759017 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.759333 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.803794 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.866959 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.932609 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-gzqdl"] Jan 26 08:13:43 crc kubenswrapper[4806]: I0126 08:13:43.932834 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" podUID="c603efb4-a0d1-474b-90a0-fc0c93aa37a3" containerName="dnsmasq-dns" containerID="cri-o://c195900afd175c117e075ab725623539c9d20fd9d8dc8574887dd3ddfe48f7ca" gracePeriod=10 Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.056452 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.166778 4806 generic.go:334] "Generic (PLEG): container finished" podID="c603efb4-a0d1-474b-90a0-fc0c93aa37a3" containerID="c195900afd175c117e075ab725623539c9d20fd9d8dc8574887dd3ddfe48f7ca" exitCode=0 Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.166866 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" event={"ID":"c603efb4-a0d1-474b-90a0-fc0c93aa37a3","Type":"ContainerDied","Data":"c195900afd175c117e075ab725623539c9d20fd9d8dc8574887dd3ddfe48f7ca"} Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.175423 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee694d4d-cfe7-43d8-be01-a4f08de501a8","Type":"ContainerStarted","Data":"9d4e6a11ce97496924bc960e36c29aac8b3d40ebec806dde6c8608870582a01d"} Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.181609 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ab397110-992c-4443-bad3-a62a2cb9d02c","Type":"ContainerDied","Data":"c44fbc36893df6f2c343d5f65f908b0ac607f2a12e36923f4a1069e0b3739cfd"} Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.181653 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.181661 4806 scope.go:117] "RemoveContainer" containerID="abb4b2d62ff50b2d12fd76939d683603c2f9db0de3a288d206bc92e1fd7763d4" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.255681 4806 scope.go:117] "RemoveContainer" containerID="dd16ba0b2d7b1ea3ad41bbd8ebf247917a09111a10cceb06ae55538b2f66bbfe" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.287608 4806 scope.go:117] "RemoveContainer" containerID="97fd25d9d1e461461889facb51bbfbc442106a4f6abfb5d4f1772926cfe809f2" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.288556 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.308344 4806 scope.go:117] "RemoveContainer" containerID="a570a0e4c919d2e04d4f6586f83d7d7bc47ffa62c55c5d9e1f65b698749d998a" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.308464 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.328026 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:44 crc kubenswrapper[4806]: E0126 08:13:44.328673 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="sg-core" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.328689 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="sg-core" Jan 26 08:13:44 crc kubenswrapper[4806]: E0126 08:13:44.328725 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="proxy-httpd" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.328732 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="proxy-httpd" Jan 26 08:13:44 crc kubenswrapper[4806]: E0126 08:13:44.328749 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="ceilometer-notification-agent" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.328755 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="ceilometer-notification-agent" Jan 26 08:13:44 crc kubenswrapper[4806]: E0126 08:13:44.328766 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="ceilometer-central-agent" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.328772 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="ceilometer-central-agent" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.328925 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="ceilometer-notification-agent" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.328943 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="proxy-httpd" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.328956 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="ceilometer-central-agent" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.328964 4806 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" containerName="sg-core" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.330660 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.335549 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.341024 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.341254 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.345382 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.345420 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-config-data\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.345463 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1907c0be-76fa-416d-ad59-3e106d418c43-run-httpd\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.345498 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7znh\" (UniqueName: \"kubernetes.io/projected/1907c0be-76fa-416d-ad59-3e106d418c43-kube-api-access-h7znh\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.345529 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1907c0be-76fa-416d-ad59-3e106d418c43-log-httpd\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.345625 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.345657 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-scripts\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.373633 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" 
podUID="c603efb4-a0d1-474b-90a0-fc0c93aa37a3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.177:5353: connect: connection refused" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.395248 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.442647 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.443191 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.448653 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-scripts\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.448764 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.448796 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-config-data\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.448831 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1907c0be-76fa-416d-ad59-3e106d418c43-run-httpd\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.448863 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7znh\" (UniqueName: \"kubernetes.io/projected/1907c0be-76fa-416d-ad59-3e106d418c43-kube-api-access-h7znh\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.448884 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1907c0be-76fa-416d-ad59-3e106d418c43-log-httpd\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.448925 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 
08:13:44.449572 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1907c0be-76fa-416d-ad59-3e106d418c43-run-httpd\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.450139 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1907c0be-76fa-416d-ad59-3e106d418c43-log-httpd\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.452091 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-scripts\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.459469 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.459654 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.464428 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-config-data\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.548565 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7znh\" (UniqueName: \"kubernetes.io/projected/1907c0be-76fa-416d-ad59-3e106d418c43-kube-api-access-h7znh\") pod \"ceilometer-0\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.651981 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.866938 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.965292 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-ovsdbserver-sb\") pod \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.965383 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-dns-swift-storage-0\") pod \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.965506 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-dns-svc\") pod \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.965674 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xb9nn\" (UniqueName: \"kubernetes.io/projected/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-kube-api-access-xb9nn\") pod \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.965742 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-ovsdbserver-nb\") pod \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.965782 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-config\") pod \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\" (UID: \"c603efb4-a0d1-474b-90a0-fc0c93aa37a3\") " Jan 26 08:13:44 crc kubenswrapper[4806]: I0126 08:13:44.982885 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-kube-api-access-xb9nn" (OuterVolumeSpecName: "kube-api-access-xb9nn") pod "c603efb4-a0d1-474b-90a0-fc0c93aa37a3" (UID: "c603efb4-a0d1-474b-90a0-fc0c93aa37a3"). InnerVolumeSpecName "kube-api-access-xb9nn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.077693 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xb9nn\" (UniqueName: \"kubernetes.io/projected/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-kube-api-access-xb9nn\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.086565 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab397110-992c-4443-bad3-a62a2cb9d02c" path="/var/lib/kubelet/pods/ab397110-992c-4443-bad3-a62a2cb9d02c/volumes" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.087347 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7649d82-1472-4df9-afad-04cf71d5138b" path="/var/lib/kubelet/pods/d7649d82-1472-4df9-afad-04cf71d5138b/volumes" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.199259 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c603efb4-a0d1-474b-90a0-fc0c93aa37a3" (UID: "c603efb4-a0d1-474b-90a0-fc0c93aa37a3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.207429 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c603efb4-a0d1-474b-90a0-fc0c93aa37a3" (UID: "c603efb4-a0d1-474b-90a0-fc0c93aa37a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.208828 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" event={"ID":"c603efb4-a0d1-474b-90a0-fc0c93aa37a3","Type":"ContainerDied","Data":"120041f453d91f3a060585dafe4b2eb8c3464e93679b402d89e8e5fcf55a90f6"} Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.208961 4806 scope.go:117] "RemoveContainer" containerID="c195900afd175c117e075ab725623539c9d20fd9d8dc8574887dd3ddfe48f7ca" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.209183 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-gzqdl" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.227366 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee694d4d-cfe7-43d8-be01-a4f08de501a8","Type":"ContainerStarted","Data":"46339bed7978c11df3c7be7d8d06fcb32d326cee0e43aa9e4d949c7f1083c87a"} Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.268961 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-config" (OuterVolumeSpecName: "config") pod "c603efb4-a0d1-474b-90a0-fc0c93aa37a3" (UID: "c603efb4-a0d1-474b-90a0-fc0c93aa37a3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.281801 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.281830 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.281843 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.289141 4806 scope.go:117] "RemoveContainer" containerID="528e5b8373edeb679c83b5b012f1fa9fdd449f1325661a9bfa96912bc4d8e006" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.289699 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c603efb4-a0d1-474b-90a0-fc0c93aa37a3" (UID: "c603efb4-a0d1-474b-90a0-fc0c93aa37a3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.290137 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c603efb4-a0d1-474b-90a0-fc0c93aa37a3" (UID: "c603efb4-a0d1-474b-90a0-fc0c93aa37a3"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.354458 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.359086 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.383412 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.383446 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c603efb4-a0d1-474b-90a0-fc0c93aa37a3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.570150 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-gzqdl"] Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.577892 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-gzqdl"] Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.806196 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:13:45 crc kubenswrapper[4806]: I0126 08:13:45.806449 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:13:46 crc kubenswrapper[4806]: I0126 08:13:46.236835 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1907c0be-76fa-416d-ad59-3e106d418c43","Type":"ContainerStarted","Data":"41ffb145c1d5faf50af42e9d1848d6d9453ea93a16379a544674db0e8e772af5"} Jan 26 08:13:46 crc kubenswrapper[4806]: I0126 08:13:46.236917 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1907c0be-76fa-416d-ad59-3e106d418c43","Type":"ContainerStarted","Data":"3d756fa9df6d882fa1fd254c3d904a17282c42a85006c1ca3eb8ff9b946ee30c"} Jan 26 08:13:46 crc kubenswrapper[4806]: I0126 08:13:46.238727 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee694d4d-cfe7-43d8-be01-a4f08de501a8","Type":"ContainerStarted","Data":"4996801cfedefdb5d70f573f4f718b260ea862eec9c710cdd30a6f2af1b67d4b"} Jan 26 08:13:46 crc kubenswrapper[4806]: I0126 08:13:46.261780 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.261758586 podStartE2EDuration="3.261758586s" podCreationTimestamp="2026-01-26 08:13:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:13:46.260341096 +0000 UTC m=+1205.524749152" watchObservedRunningTime="2026-01-26 08:13:46.261758586 +0000 UTC m=+1205.526166642" Jan 26 08:13:47 crc kubenswrapper[4806]: I0126 08:13:47.051203 4806 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="c603efb4-a0d1-474b-90a0-fc0c93aa37a3" path="/var/lib/kubelet/pods/c603efb4-a0d1-474b-90a0-fc0c93aa37a3/volumes" Jan 26 08:13:47 crc kubenswrapper[4806]: I0126 08:13:47.250961 4806 generic.go:334] "Generic (PLEG): container finished" podID="0d495701-d98d-4c0a-be75-2330f3589594" containerID="89e896068718c5e18ba6426ea7fd689d74dd449ffa6ca67a6ea77d410009d80e" exitCode=0 Jan 26 08:13:47 crc kubenswrapper[4806]: I0126 08:13:47.251028 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pwr48" event={"ID":"0d495701-d98d-4c0a-be75-2330f3589594","Type":"ContainerDied","Data":"89e896068718c5e18ba6426ea7fd689d74dd449ffa6ca67a6ea77d410009d80e"} Jan 26 08:13:47 crc kubenswrapper[4806]: I0126 08:13:47.253259 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1907c0be-76fa-416d-ad59-3e106d418c43","Type":"ContainerStarted","Data":"95eb1ec0195a28be3bde73feba41786fc919a340e066c478330c4127cc0bc337"} Jan 26 08:13:48 crc kubenswrapper[4806]: I0126 08:13:48.263901 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1907c0be-76fa-416d-ad59-3e106d418c43","Type":"ContainerStarted","Data":"d0521d5a61ebea8f5723d5494b314a2548fbac1d3d9c7647095705c26bc120c6"} Jan 26 08:13:48 crc kubenswrapper[4806]: I0126 08:13:48.581255 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 08:13:48 crc kubenswrapper[4806]: I0126 08:13:48.581484 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 08:13:48 crc kubenswrapper[4806]: I0126 08:13:48.784664 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:48 crc kubenswrapper[4806]: I0126 08:13:48.960956 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-scripts\") pod \"0d495701-d98d-4c0a-be75-2330f3589594\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " Jan 26 08:13:48 crc kubenswrapper[4806]: I0126 08:13:48.961435 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-config-data\") pod \"0d495701-d98d-4c0a-be75-2330f3589594\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " Jan 26 08:13:48 crc kubenswrapper[4806]: I0126 08:13:48.961576 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4brfn\" (UniqueName: \"kubernetes.io/projected/0d495701-d98d-4c0a-be75-2330f3589594-kube-api-access-4brfn\") pod \"0d495701-d98d-4c0a-be75-2330f3589594\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " Jan 26 08:13:48 crc kubenswrapper[4806]: I0126 08:13:48.961781 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-combined-ca-bundle\") pod \"0d495701-d98d-4c0a-be75-2330f3589594\" (UID: \"0d495701-d98d-4c0a-be75-2330f3589594\") " Jan 26 08:13:48 crc kubenswrapper[4806]: I0126 08:13:48.967334 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d495701-d98d-4c0a-be75-2330f3589594-kube-api-access-4brfn" (OuterVolumeSpecName: "kube-api-access-4brfn") pod 
"0d495701-d98d-4c0a-be75-2330f3589594" (UID: "0d495701-d98d-4c0a-be75-2330f3589594"). InnerVolumeSpecName "kube-api-access-4brfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:13:48 crc kubenswrapper[4806]: I0126 08:13:48.973103 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-scripts" (OuterVolumeSpecName: "scripts") pod "0d495701-d98d-4c0a-be75-2330f3589594" (UID: "0d495701-d98d-4c0a-be75-2330f3589594"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.001140 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d495701-d98d-4c0a-be75-2330f3589594" (UID: "0d495701-d98d-4c0a-be75-2330f3589594"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.016042 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-config-data" (OuterVolumeSpecName: "config-data") pod "0d495701-d98d-4c0a-be75-2330f3589594" (UID: "0d495701-d98d-4c0a-be75-2330f3589594"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.064560 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.064601 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.064613 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4brfn\" (UniqueName: \"kubernetes.io/projected/0d495701-d98d-4c0a-be75-2330f3589594-kube-api-access-4brfn\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.064624 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d495701-d98d-4c0a-be75-2330f3589594-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.273191 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-pwr48" event={"ID":"0d495701-d98d-4c0a-be75-2330f3589594","Type":"ContainerDied","Data":"31e93cffeec5602365abbd2e0b2ad2b0811f931457702e0402364446b67f47bf"} Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.273233 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31e93cffeec5602365abbd2e0b2ad2b0811f931457702e0402364446b67f47bf" Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.273250 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-pwr48" Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.277298 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1907c0be-76fa-416d-ad59-3e106d418c43","Type":"ContainerStarted","Data":"ab57275b20b2b33e2d67c05d237dfc4419fe687aabb8d5f9fc1967d1c6cfa489"} Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.277515 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.303426 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.978468924 podStartE2EDuration="5.303400608s" podCreationTimestamp="2026-01-26 08:13:44 +0000 UTC" firstStartedPulling="2026-01-26 08:13:45.354226467 +0000 UTC m=+1204.618634523" lastFinishedPulling="2026-01-26 08:13:48.679158151 +0000 UTC m=+1207.943566207" observedRunningTime="2026-01-26 08:13:49.303375377 +0000 UTC m=+1208.567783443" watchObservedRunningTime="2026-01-26 08:13:49.303400608 +0000 UTC m=+1208.567808684" Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.480631 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.480844 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" containerName="nova-api-log" containerID="cri-o://1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a" gracePeriod=30 Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.481275 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" containerName="nova-api-api" containerID="cri-o://1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d" gracePeriod=30 Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.507923 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.508144 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f7717aee-8b1d-48e4-87f0-ea8d2fd313c2" containerName="nova-scheduler-scheduler" containerID="cri-o://8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559" gracePeriod=30 Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.524080 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.525862 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ee694d4d-cfe7-43d8-be01-a4f08de501a8" containerName="nova-metadata-log" containerID="cri-o://46339bed7978c11df3c7be7d8d06fcb32d326cee0e43aa9e4d949c7f1083c87a" gracePeriod=30 Jan 26 08:13:49 crc kubenswrapper[4806]: I0126 08:13:49.526148 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="ee694d4d-cfe7-43d8-be01-a4f08de501a8" containerName="nova-metadata-metadata" containerID="cri-o://4996801cfedefdb5d70f573f4f718b260ea862eec9c710cdd30a6f2af1b67d4b" gracePeriod=30 Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.295977 4806 generic.go:334] "Generic (PLEG): container finished" podID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" 
containerID="1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a" exitCode=143 Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.296404 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec315438-1a4f-4779-ac65-7c8adcbf0c69","Type":"ContainerDied","Data":"1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a"} Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.303052 4806 generic.go:334] "Generic (PLEG): container finished" podID="ee694d4d-cfe7-43d8-be01-a4f08de501a8" containerID="4996801cfedefdb5d70f573f4f718b260ea862eec9c710cdd30a6f2af1b67d4b" exitCode=0 Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.303076 4806 generic.go:334] "Generic (PLEG): container finished" podID="ee694d4d-cfe7-43d8-be01-a4f08de501a8" containerID="46339bed7978c11df3c7be7d8d06fcb32d326cee0e43aa9e4d949c7f1083c87a" exitCode=143 Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.304366 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee694d4d-cfe7-43d8-be01-a4f08de501a8","Type":"ContainerDied","Data":"4996801cfedefdb5d70f573f4f718b260ea862eec9c710cdd30a6f2af1b67d4b"} Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.304404 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee694d4d-cfe7-43d8-be01-a4f08de501a8","Type":"ContainerDied","Data":"46339bed7978c11df3c7be7d8d06fcb32d326cee0e43aa9e4d949c7f1083c87a"} Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.505295 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.600485 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl6r6\" (UniqueName: \"kubernetes.io/projected/ee694d4d-cfe7-43d8-be01-a4f08de501a8-kube-api-access-wl6r6\") pod \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.600596 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-combined-ca-bundle\") pod \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.600633 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-config-data\") pod \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.600686 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-nova-metadata-tls-certs\") pod \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.600723 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee694d4d-cfe7-43d8-be01-a4f08de501a8-logs\") pod \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\" (UID: \"ee694d4d-cfe7-43d8-be01-a4f08de501a8\") " Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.601170 4806 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee694d4d-cfe7-43d8-be01-a4f08de501a8-logs" (OuterVolumeSpecName: "logs") pod "ee694d4d-cfe7-43d8-be01-a4f08de501a8" (UID: "ee694d4d-cfe7-43d8-be01-a4f08de501a8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.606810 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee694d4d-cfe7-43d8-be01-a4f08de501a8-kube-api-access-wl6r6" (OuterVolumeSpecName: "kube-api-access-wl6r6") pod "ee694d4d-cfe7-43d8-be01-a4f08de501a8" (UID: "ee694d4d-cfe7-43d8-be01-a4f08de501a8"). InnerVolumeSpecName "kube-api-access-wl6r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.637473 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee694d4d-cfe7-43d8-be01-a4f08de501a8" (UID: "ee694d4d-cfe7-43d8-be01-a4f08de501a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.683241 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-config-data" (OuterVolumeSpecName: "config-data") pod "ee694d4d-cfe7-43d8-be01-a4f08de501a8" (UID: "ee694d4d-cfe7-43d8-be01-a4f08de501a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.689419 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "ee694d4d-cfe7-43d8-be01-a4f08de501a8" (UID: "ee694d4d-cfe7-43d8-be01-a4f08de501a8"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.702424 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.702464 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.702477 4806 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee694d4d-cfe7-43d8-be01-a4f08de501a8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.702489 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee694d4d-cfe7-43d8-be01-a4f08de501a8-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:50 crc kubenswrapper[4806]: I0126 08:13:50.702501 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl6r6\" (UniqueName: \"kubernetes.io/projected/ee694d4d-cfe7-43d8-be01-a4f08de501a8-kube-api-access-wl6r6\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.125670 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.311876 4806 generic.go:334] "Generic (PLEG): container finished" podID="f7717aee-8b1d-48e4-87f0-ea8d2fd313c2" containerID="8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559" exitCode=0 Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.311925 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2","Type":"ContainerDied","Data":"8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559"} Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.311951 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.313082 4806 scope.go:117] "RemoveContainer" containerID="8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.312980 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2","Type":"ContainerDied","Data":"9b77a2c4b3f9558cdf37ef3b42d2bf813279f183849be66dca9465df20424fcc"} Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.313585 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-combined-ca-bundle\") pod \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\" (UID: \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\") " Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.313680 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-config-data\") pod \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\" (UID: \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\") " Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.313750 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44tkq\" (UniqueName: \"kubernetes.io/projected/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-kube-api-access-44tkq\") pod \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\" (UID: \"f7717aee-8b1d-48e4-87f0-ea8d2fd313c2\") " Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.316795 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-kube-api-access-44tkq" (OuterVolumeSpecName: "kube-api-access-44tkq") pod "f7717aee-8b1d-48e4-87f0-ea8d2fd313c2" (UID: "f7717aee-8b1d-48e4-87f0-ea8d2fd313c2"). InnerVolumeSpecName "kube-api-access-44tkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.339587 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee694d4d-cfe7-43d8-be01-a4f08de501a8","Type":"ContainerDied","Data":"9d4e6a11ce97496924bc960e36c29aac8b3d40ebec806dde6c8608870582a01d"} Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.339773 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.378690 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7717aee-8b1d-48e4-87f0-ea8d2fd313c2" (UID: "f7717aee-8b1d-48e4-87f0-ea8d2fd313c2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.390726 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-config-data" (OuterVolumeSpecName: "config-data") pod "f7717aee-8b1d-48e4-87f0-ea8d2fd313c2" (UID: "f7717aee-8b1d-48e4-87f0-ea8d2fd313c2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.416478 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.416642 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.416700 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44tkq\" (UniqueName: \"kubernetes.io/projected/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2-kube-api-access-44tkq\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.429432 4806 scope.go:117] "RemoveContainer" containerID="8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559" Jan 26 08:13:51 crc kubenswrapper[4806]: E0126 08:13:51.429907 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559\": container with ID starting with 8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559 not found: ID does not exist" containerID="8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.429951 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559"} err="failed to get container status \"8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559\": rpc error: code = NotFound desc = could not find container \"8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559\": container with ID starting with 8e59d39aa2bdbe93c74b4686552617dfb06994e546c5b060972276a68810a559 not found: ID does not exist" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.429979 4806 scope.go:117] "RemoveContainer" containerID="4996801cfedefdb5d70f573f4f718b260ea862eec9c710cdd30a6f2af1b67d4b" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.442146 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.453711 4806 scope.go:117] "RemoveContainer" containerID="46339bed7978c11df3c7be7d8d06fcb32d326cee0e43aa9e4d949c7f1083c87a" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.456837 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.472879 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:51 crc kubenswrapper[4806]: E0126 08:13:51.473308 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7717aee-8b1d-48e4-87f0-ea8d2fd313c2" containerName="nova-scheduler-scheduler" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.473330 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7717aee-8b1d-48e4-87f0-ea8d2fd313c2" containerName="nova-scheduler-scheduler" Jan 26 08:13:51 crc kubenswrapper[4806]: E0126 08:13:51.473345 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c603efb4-a0d1-474b-90a0-fc0c93aa37a3" containerName="dnsmasq-dns" Jan 26 08:13:51 crc 
kubenswrapper[4806]: I0126 08:13:51.473352 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c603efb4-a0d1-474b-90a0-fc0c93aa37a3" containerName="dnsmasq-dns" Jan 26 08:13:51 crc kubenswrapper[4806]: E0126 08:13:51.473365 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c603efb4-a0d1-474b-90a0-fc0c93aa37a3" containerName="init" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.473371 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c603efb4-a0d1-474b-90a0-fc0c93aa37a3" containerName="init" Jan 26 08:13:51 crc kubenswrapper[4806]: E0126 08:13:51.473381 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee694d4d-cfe7-43d8-be01-a4f08de501a8" containerName="nova-metadata-metadata" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.473387 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee694d4d-cfe7-43d8-be01-a4f08de501a8" containerName="nova-metadata-metadata" Jan 26 08:13:51 crc kubenswrapper[4806]: E0126 08:13:51.473397 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d495701-d98d-4c0a-be75-2330f3589594" containerName="nova-manage" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.473403 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d495701-d98d-4c0a-be75-2330f3589594" containerName="nova-manage" Jan 26 08:13:51 crc kubenswrapper[4806]: E0126 08:13:51.473417 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee694d4d-cfe7-43d8-be01-a4f08de501a8" containerName="nova-metadata-log" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.473424 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee694d4d-cfe7-43d8-be01-a4f08de501a8" containerName="nova-metadata-log" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.473633 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c603efb4-a0d1-474b-90a0-fc0c93aa37a3" containerName="dnsmasq-dns" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.473650 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee694d4d-cfe7-43d8-be01-a4f08de501a8" containerName="nova-metadata-log" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.473661 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee694d4d-cfe7-43d8-be01-a4f08de501a8" containerName="nova-metadata-metadata" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.473670 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d495701-d98d-4c0a-be75-2330f3589594" containerName="nova-manage" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.473686 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7717aee-8b1d-48e4-87f0-ea8d2fd313c2" containerName="nova-scheduler-scheduler" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.486237 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.489448 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.490097 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.523577 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.620681 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-config-data\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.620773 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-logs\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.620808 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmnhk\" (UniqueName: \"kubernetes.io/projected/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-kube-api-access-bmnhk\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.620860 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.620955 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.662068 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.678013 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.716229 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.717631 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.722893 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.724074 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.724271 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-config-data\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.724309 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-logs\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.724329 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmnhk\" (UniqueName: \"kubernetes.io/projected/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-kube-api-access-bmnhk\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.724349 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.725407 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-logs\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.731317 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.731557 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-config-data\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.733590 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.740090 4806 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.762124 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmnhk\" (UniqueName: \"kubernetes.io/projected/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-kube-api-access-bmnhk\") pod \"nova-metadata-0\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.807856 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.826245 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb88266-6c78-4f92-89a8-1f8eb73b60c3-config-data\") pod \"nova-scheduler-0\" (UID: \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.826326 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q72nk\" (UniqueName: \"kubernetes.io/projected/adb88266-6c78-4f92-89a8-1f8eb73b60c3-kube-api-access-q72nk\") pod \"nova-scheduler-0\" (UID: \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.826456 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb88266-6c78-4f92-89a8-1f8eb73b60c3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.927613 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb88266-6c78-4f92-89a8-1f8eb73b60c3-config-data\") pod \"nova-scheduler-0\" (UID: \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.927712 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q72nk\" (UniqueName: \"kubernetes.io/projected/adb88266-6c78-4f92-89a8-1f8eb73b60c3-kube-api-access-q72nk\") pod \"nova-scheduler-0\" (UID: \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.927818 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb88266-6c78-4f92-89a8-1f8eb73b60c3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.940349 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb88266-6c78-4f92-89a8-1f8eb73b60c3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.941893 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb88266-6c78-4f92-89a8-1f8eb73b60c3-config-data\") pod \"nova-scheduler-0\" (UID: \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\") " 
pod="openstack/nova-scheduler-0" Jan 26 08:13:51 crc kubenswrapper[4806]: I0126 08:13:51.952408 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q72nk\" (UniqueName: \"kubernetes.io/projected/adb88266-6c78-4f92-89a8-1f8eb73b60c3-kube-api-access-q72nk\") pod \"nova-scheduler-0\" (UID: \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\") " pod="openstack/nova-scheduler-0" Jan 26 08:13:52 crc kubenswrapper[4806]: I0126 08:13:52.133033 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 08:13:52 crc kubenswrapper[4806]: I0126 08:13:52.383385 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:13:52 crc kubenswrapper[4806]: I0126 08:13:52.670504 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.057651 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee694d4d-cfe7-43d8-be01-a4f08de501a8" path="/var/lib/kubelet/pods/ee694d4d-cfe7-43d8-be01-a4f08de501a8/volumes" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.058674 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7717aee-8b1d-48e4-87f0-ea8d2fd313c2" path="/var/lib/kubelet/pods/f7717aee-8b1d-48e4-87f0-ea8d2fd313c2/volumes" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.385405 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8","Type":"ContainerStarted","Data":"80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30"} Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.385706 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8","Type":"ContainerStarted","Data":"4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f"} Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.385718 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8","Type":"ContainerStarted","Data":"c58cfeed0725a913238d2fd90bd33fcd4046b7d0674f53f175f966b6d9258f4c"} Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.411832 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.411813362 podStartE2EDuration="2.411813362s" podCreationTimestamp="2026-01-26 08:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:13:53.408690685 +0000 UTC m=+1212.673098741" watchObservedRunningTime="2026-01-26 08:13:53.411813362 +0000 UTC m=+1212.676221418" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.419786 4806 generic.go:334] "Generic (PLEG): container finished" podID="17a62481-034a-4042-b58d-a3ebf9e99202" containerID="8e58449f0541bdbd765bf2724cf689af99f9580047a40d7e34b5976769a0b19a" exitCode=0 Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.419836 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ltfdd" event={"ID":"17a62481-034a-4042-b58d-a3ebf9e99202","Type":"ContainerDied","Data":"8e58449f0541bdbd765bf2724cf689af99f9580047a40d7e34b5976769a0b19a"} Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.428329 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-scheduler-0" event={"ID":"adb88266-6c78-4f92-89a8-1f8eb73b60c3","Type":"ContainerStarted","Data":"15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b"} Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.428368 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"adb88266-6c78-4f92-89a8-1f8eb73b60c3","Type":"ContainerStarted","Data":"ad3234e14f1189d7798d4648d0e81ea2a840bdc58ce8228c993380d6bddfad1f"} Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.429764 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.431086 4806 generic.go:334] "Generic (PLEG): container finished" podID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" containerID="1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d" exitCode=0 Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.431130 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec315438-1a4f-4779-ac65-7c8adcbf0c69","Type":"ContainerDied","Data":"1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d"} Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.431182 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ec315438-1a4f-4779-ac65-7c8adcbf0c69","Type":"ContainerDied","Data":"3486154aa5c012fb03f8a1e2d74f918de240a494bc1d82a4e7df4d9534517699"} Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.431199 4806 scope.go:117] "RemoveContainer" containerID="1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.450373 4806 scope.go:117] "RemoveContainer" containerID="1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.474070 4806 scope.go:117] "RemoveContainer" containerID="1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d" Jan 26 08:13:53 crc kubenswrapper[4806]: E0126 08:13:53.477364 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d\": container with ID starting with 1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d not found: ID does not exist" containerID="1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.477397 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d"} err="failed to get container status \"1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d\": rpc error: code = NotFound desc = could not find container \"1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d\": container with ID starting with 1b02d0c58daacaa87044dff41663ba8bd99d64b337d27bf144ad7f7266e4ed9d not found: ID does not exist" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.477418 4806 scope.go:117] "RemoveContainer" containerID="1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a" Jan 26 08:13:53 crc kubenswrapper[4806]: E0126 08:13:53.484003 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a\": container with ID 
starting with 1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a not found: ID does not exist" containerID="1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.484037 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a"} err="failed to get container status \"1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a\": rpc error: code = NotFound desc = could not find container \"1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a\": container with ID starting with 1a39b7fe8c41b4d33478f5907b6d8fecd0cc77a62c905261447b1d2cd3c6b06a not found: ID does not exist" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.492169 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.492153359 podStartE2EDuration="2.492153359s" podCreationTimestamp="2026-01-26 08:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:13:53.46395048 +0000 UTC m=+1212.728358566" watchObservedRunningTime="2026-01-26 08:13:53.492153359 +0000 UTC m=+1212.756561405" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.577418 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wkqh\" (UniqueName: \"kubernetes.io/projected/ec315438-1a4f-4779-ac65-7c8adcbf0c69-kube-api-access-2wkqh\") pod \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.578350 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec315438-1a4f-4779-ac65-7c8adcbf0c69-logs\") pod \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.578678 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec315438-1a4f-4779-ac65-7c8adcbf0c69-logs" (OuterVolumeSpecName: "logs") pod "ec315438-1a4f-4779-ac65-7c8adcbf0c69" (UID: "ec315438-1a4f-4779-ac65-7c8adcbf0c69"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.578815 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec315438-1a4f-4779-ac65-7c8adcbf0c69-config-data\") pod \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.579201 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec315438-1a4f-4779-ac65-7c8adcbf0c69-combined-ca-bundle\") pod \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\" (UID: \"ec315438-1a4f-4779-ac65-7c8adcbf0c69\") " Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.579724 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec315438-1a4f-4779-ac65-7c8adcbf0c69-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.585657 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec315438-1a4f-4779-ac65-7c8adcbf0c69-kube-api-access-2wkqh" (OuterVolumeSpecName: "kube-api-access-2wkqh") pod "ec315438-1a4f-4779-ac65-7c8adcbf0c69" (UID: "ec315438-1a4f-4779-ac65-7c8adcbf0c69"). InnerVolumeSpecName "kube-api-access-2wkqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.606504 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec315438-1a4f-4779-ac65-7c8adcbf0c69-config-data" (OuterVolumeSpecName: "config-data") pod "ec315438-1a4f-4779-ac65-7c8adcbf0c69" (UID: "ec315438-1a4f-4779-ac65-7c8adcbf0c69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.629451 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec315438-1a4f-4779-ac65-7c8adcbf0c69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec315438-1a4f-4779-ac65-7c8adcbf0c69" (UID: "ec315438-1a4f-4779-ac65-7c8adcbf0c69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.681841 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec315438-1a4f-4779-ac65-7c8adcbf0c69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.681874 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wkqh\" (UniqueName: \"kubernetes.io/projected/ec315438-1a4f-4779-ac65-7c8adcbf0c69-kube-api-access-2wkqh\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:53 crc kubenswrapper[4806]: I0126 08:13:53.681885 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec315438-1a4f-4779-ac65-7c8adcbf0c69-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.442984 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.503734 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.514746 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.582023 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 08:13:54 crc kubenswrapper[4806]: E0126 08:13:54.583500 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" containerName="nova-api-api" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.583533 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" containerName="nova-api-api" Jan 26 08:13:54 crc kubenswrapper[4806]: E0126 08:13:54.583555 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" containerName="nova-api-log" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.583561 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" containerName="nova-api-log" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.585419 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" containerName="nova-api-log" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.585441 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" containerName="nova-api-api" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.588403 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.591968 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.620183 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.710885 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmnb2\" (UniqueName: \"kubernetes.io/projected/2d59962d-c0f8-41f3-92eb-917eb72966f4-kube-api-access-mmnb2\") pod \"nova-api-0\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.710955 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d59962d-c0f8-41f3-92eb-917eb72966f4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.711058 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d59962d-c0f8-41f3-92eb-917eb72966f4-logs\") pod \"nova-api-0\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.711103 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d59962d-c0f8-41f3-92eb-917eb72966f4-config-data\") pod \"nova-api-0\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.814594 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmnb2\" (UniqueName: \"kubernetes.io/projected/2d59962d-c0f8-41f3-92eb-917eb72966f4-kube-api-access-mmnb2\") pod \"nova-api-0\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.814656 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d59962d-c0f8-41f3-92eb-917eb72966f4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.814704 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d59962d-c0f8-41f3-92eb-917eb72966f4-logs\") pod \"nova-api-0\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.814739 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d59962d-c0f8-41f3-92eb-917eb72966f4-config-data\") pod \"nova-api-0\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.821035 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d59962d-c0f8-41f3-92eb-917eb72966f4-logs\") pod \"nova-api-0\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " 
pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.821806 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d59962d-c0f8-41f3-92eb-917eb72966f4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.822061 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d59962d-c0f8-41f3-92eb-917eb72966f4-config-data\") pod \"nova-api-0\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.848181 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmnb2\" (UniqueName: \"kubernetes.io/projected/2d59962d-c0f8-41f3-92eb-917eb72966f4-kube-api-access-mmnb2\") pod \"nova-api-0\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " pod="openstack/nova-api-0" Jan 26 08:13:54 crc kubenswrapper[4806]: I0126 08:13:54.916413 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.072059 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec315438-1a4f-4779-ac65-7c8adcbf0c69" path="/var/lib/kubelet/pods/ec315438-1a4f-4779-ac65-7c8adcbf0c69/volumes" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.097137 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.221070 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xwk6\" (UniqueName: \"kubernetes.io/projected/17a62481-034a-4042-b58d-a3ebf9e99202-kube-api-access-5xwk6\") pod \"17a62481-034a-4042-b58d-a3ebf9e99202\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.221123 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-combined-ca-bundle\") pod \"17a62481-034a-4042-b58d-a3ebf9e99202\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.222088 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-config-data\") pod \"17a62481-034a-4042-b58d-a3ebf9e99202\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.222162 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-scripts\") pod \"17a62481-034a-4042-b58d-a3ebf9e99202\" (UID: \"17a62481-034a-4042-b58d-a3ebf9e99202\") " Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.226159 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17a62481-034a-4042-b58d-a3ebf9e99202-kube-api-access-5xwk6" (OuterVolumeSpecName: "kube-api-access-5xwk6") pod "17a62481-034a-4042-b58d-a3ebf9e99202" (UID: "17a62481-034a-4042-b58d-a3ebf9e99202"). InnerVolumeSpecName "kube-api-access-5xwk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.226591 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-scripts" (OuterVolumeSpecName: "scripts") pod "17a62481-034a-4042-b58d-a3ebf9e99202" (UID: "17a62481-034a-4042-b58d-a3ebf9e99202"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.260034 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-config-data" (OuterVolumeSpecName: "config-data") pod "17a62481-034a-4042-b58d-a3ebf9e99202" (UID: "17a62481-034a-4042-b58d-a3ebf9e99202"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.271283 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17a62481-034a-4042-b58d-a3ebf9e99202" (UID: "17a62481-034a-4042-b58d-a3ebf9e99202"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.326321 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.326351 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.326363 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xwk6\" (UniqueName: \"kubernetes.io/projected/17a62481-034a-4042-b58d-a3ebf9e99202-kube-api-access-5xwk6\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.326373 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17a62481-034a-4042-b58d-a3ebf9e99202-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.435061 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.458750 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d59962d-c0f8-41f3-92eb-917eb72966f4","Type":"ContainerStarted","Data":"69ab8fb026bbffe77de5789dd440903020be2eea36a6c2f99fa83ce280893f44"} Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.464329 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ltfdd" event={"ID":"17a62481-034a-4042-b58d-a3ebf9e99202","Type":"ContainerDied","Data":"f8e52b7879ac1f95fad5988c9ad98ad2edd859d4274bafbf0f250ac77180dddc"} Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.464366 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8e52b7879ac1f95fad5988c9ad98ad2edd859d4274bafbf0f250ac77180dddc" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.464429 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ltfdd" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.557022 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 08:13:55 crc kubenswrapper[4806]: E0126 08:13:55.557710 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17a62481-034a-4042-b58d-a3ebf9e99202" containerName="nova-cell1-conductor-db-sync" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.557726 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="17a62481-034a-4042-b58d-a3ebf9e99202" containerName="nova-cell1-conductor-db-sync" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.557960 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="17a62481-034a-4042-b58d-a3ebf9e99202" containerName="nova-cell1-conductor-db-sync" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.558605 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.560708 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.572790 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.737289 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f0f405-bc44-4051-af6e-ece4bf71bdbb-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"58f0f405-bc44-4051-af6e-ece4bf71bdbb\") " pod="openstack/nova-cell1-conductor-0" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.737457 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f0f405-bc44-4051-af6e-ece4bf71bdbb-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"58f0f405-bc44-4051-af6e-ece4bf71bdbb\") " pod="openstack/nova-cell1-conductor-0" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.737518 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdp8k\" (UniqueName: \"kubernetes.io/projected/58f0f405-bc44-4051-af6e-ece4bf71bdbb-kube-api-access-cdp8k\") pod \"nova-cell1-conductor-0\" (UID: \"58f0f405-bc44-4051-af6e-ece4bf71bdbb\") " pod="openstack/nova-cell1-conductor-0" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.838885 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f0f405-bc44-4051-af6e-ece4bf71bdbb-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"58f0f405-bc44-4051-af6e-ece4bf71bdbb\") " pod="openstack/nova-cell1-conductor-0" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.839015 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f0f405-bc44-4051-af6e-ece4bf71bdbb-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"58f0f405-bc44-4051-af6e-ece4bf71bdbb\") " pod="openstack/nova-cell1-conductor-0" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.839067 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdp8k\" (UniqueName: 
\"kubernetes.io/projected/58f0f405-bc44-4051-af6e-ece4bf71bdbb-kube-api-access-cdp8k\") pod \"nova-cell1-conductor-0\" (UID: \"58f0f405-bc44-4051-af6e-ece4bf71bdbb\") " pod="openstack/nova-cell1-conductor-0" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.843350 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58f0f405-bc44-4051-af6e-ece4bf71bdbb-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"58f0f405-bc44-4051-af6e-ece4bf71bdbb\") " pod="openstack/nova-cell1-conductor-0" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.843499 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58f0f405-bc44-4051-af6e-ece4bf71bdbb-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"58f0f405-bc44-4051-af6e-ece4bf71bdbb\") " pod="openstack/nova-cell1-conductor-0" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.865479 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdp8k\" (UniqueName: \"kubernetes.io/projected/58f0f405-bc44-4051-af6e-ece4bf71bdbb-kube-api-access-cdp8k\") pod \"nova-cell1-conductor-0\" (UID: \"58f0f405-bc44-4051-af6e-ece4bf71bdbb\") " pod="openstack/nova-cell1-conductor-0" Jan 26 08:13:55 crc kubenswrapper[4806]: I0126 08:13:55.886142 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 08:13:56 crc kubenswrapper[4806]: I0126 08:13:56.368668 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 08:13:56 crc kubenswrapper[4806]: I0126 08:13:56.474079 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d59962d-c0f8-41f3-92eb-917eb72966f4","Type":"ContainerStarted","Data":"fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a"} Jan 26 08:13:56 crc kubenswrapper[4806]: I0126 08:13:56.474427 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d59962d-c0f8-41f3-92eb-917eb72966f4","Type":"ContainerStarted","Data":"950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8"} Jan 26 08:13:56 crc kubenswrapper[4806]: I0126 08:13:56.476906 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"58f0f405-bc44-4051-af6e-ece4bf71bdbb","Type":"ContainerStarted","Data":"60c9cafc7c856f934ad25b333cddaac618c2e493861feb7399b71187cc4671ce"} Jan 26 08:13:56 crc kubenswrapper[4806]: I0126 08:13:56.508068 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.508046812 podStartE2EDuration="2.508046812s" podCreationTimestamp="2026-01-26 08:13:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:13:56.49725329 +0000 UTC m=+1215.761661346" watchObservedRunningTime="2026-01-26 08:13:56.508046812 +0000 UTC m=+1215.772454878" Jan 26 08:13:56 crc kubenswrapper[4806]: I0126 08:13:56.808007 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 08:13:56 crc kubenswrapper[4806]: I0126 08:13:56.808068 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 08:13:57 crc kubenswrapper[4806]: I0126 08:13:57.133623 4806 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 08:13:57 crc kubenswrapper[4806]: I0126 08:13:57.486529 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"58f0f405-bc44-4051-af6e-ece4bf71bdbb","Type":"ContainerStarted","Data":"f4e07f9269891438de0e6cc51d07ec3b6cfc93fd82c83cb9f71e2037f19efca8"} Jan 26 08:13:57 crc kubenswrapper[4806]: I0126 08:13:57.487403 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 08:13:57 crc kubenswrapper[4806]: I0126 08:13:57.511817 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.511798951 podStartE2EDuration="2.511798951s" podCreationTimestamp="2026-01-26 08:13:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:13:57.505367232 +0000 UTC m=+1216.769775288" watchObservedRunningTime="2026-01-26 08:13:57.511798951 +0000 UTC m=+1216.776207007" Jan 26 08:14:01 crc kubenswrapper[4806]: I0126 08:14:01.808322 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 08:14:01 crc kubenswrapper[4806]: I0126 08:14:01.808999 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 08:14:02 crc kubenswrapper[4806]: I0126 08:14:02.133833 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 08:14:02 crc kubenswrapper[4806]: I0126 08:14:02.168312 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 08:14:02 crc kubenswrapper[4806]: I0126 08:14:02.614509 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 08:14:02 crc kubenswrapper[4806]: I0126 08:14:02.822681 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 08:14:02 crc kubenswrapper[4806]: I0126 08:14:02.822770 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 08:14:04 crc kubenswrapper[4806]: I0126 08:14:04.917495 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 08:14:04 crc kubenswrapper[4806]: I0126 08:14:04.917980 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 08:14:05 crc kubenswrapper[4806]: I0126 08:14:05.912962 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 08:14:06 crc kubenswrapper[4806]: I0126 08:14:06.000735 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2d59962d-c0f8-41f3-92eb-917eb72966f4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Jan 26 08:14:06 crc kubenswrapper[4806]: I0126 08:14:06.001038 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2d59962d-c0f8-41f3-92eb-917eb72966f4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.212:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.432586 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.443641 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrg9r\" (UniqueName: \"kubernetes.io/projected/4ac27597-a156-4995-a67e-98858e667c8a-kube-api-access-hrg9r\") pod \"4ac27597-a156-4995-a67e-98858e667c8a\" (UID: \"4ac27597-a156-4995-a67e-98858e667c8a\") " Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.443950 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac27597-a156-4995-a67e-98858e667c8a-combined-ca-bundle\") pod \"4ac27597-a156-4995-a67e-98858e667c8a\" (UID: \"4ac27597-a156-4995-a67e-98858e667c8a\") " Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.444056 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac27597-a156-4995-a67e-98858e667c8a-config-data\") pod \"4ac27597-a156-4995-a67e-98858e667c8a\" (UID: \"4ac27597-a156-4995-a67e-98858e667c8a\") " Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.448855 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac27597-a156-4995-a67e-98858e667c8a-kube-api-access-hrg9r" (OuterVolumeSpecName: "kube-api-access-hrg9r") pod "4ac27597-a156-4995-a67e-98858e667c8a" (UID: "4ac27597-a156-4995-a67e-98858e667c8a"). InnerVolumeSpecName "kube-api-access-hrg9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.517737 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac27597-a156-4995-a67e-98858e667c8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ac27597-a156-4995-a67e-98858e667c8a" (UID: "4ac27597-a156-4995-a67e-98858e667c8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.536669 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ac27597-a156-4995-a67e-98858e667c8a-config-data" (OuterVolumeSpecName: "config-data") pod "4ac27597-a156-4995-a67e-98858e667c8a" (UID: "4ac27597-a156-4995-a67e-98858e667c8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.554083 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ac27597-a156-4995-a67e-98858e667c8a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.554122 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrg9r\" (UniqueName: \"kubernetes.io/projected/4ac27597-a156-4995-a67e-98858e667c8a-kube-api-access-hrg9r\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.554132 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ac27597-a156-4995-a67e-98858e667c8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.646813 4806 generic.go:334] "Generic (PLEG): container finished" podID="4ac27597-a156-4995-a67e-98858e667c8a" containerID="0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78" exitCode=137 Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.646857 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4ac27597-a156-4995-a67e-98858e667c8a","Type":"ContainerDied","Data":"0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78"} Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.646871 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.646893 4806 scope.go:117] "RemoveContainer" containerID="0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.646882 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4ac27597-a156-4995-a67e-98858e667c8a","Type":"ContainerDied","Data":"0ceaa7a10c746943f0423e155af4fd22a17a5920a04d8a69626e862748a2d675"} Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.667875 4806 scope.go:117] "RemoveContainer" containerID="0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78" Jan 26 08:14:11 crc kubenswrapper[4806]: E0126 08:14:11.668305 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78\": container with ID starting with 0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78 not found: ID does not exist" containerID="0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.668346 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78"} err="failed to get container status \"0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78\": rpc error: code = NotFound desc = could not find container \"0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78\": container with ID starting with 0fc4469fbe8602c6b944bc3ef839cbf3e4a62919548e5ea249b13802680aeb78 not found: ID does not exist" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.682618 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 
08:14:11.697320 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.713222 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 08:14:11 crc kubenswrapper[4806]: E0126 08:14:11.713658 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac27597-a156-4995-a67e-98858e667c8a" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.713671 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac27597-a156-4995-a67e-98858e667c8a" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.713860 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac27597-a156-4995-a67e-98858e667c8a" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.714502 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.718434 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.718623 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.718783 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.727230 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.815147 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.815364 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.820657 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.820874 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.860234 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce10b87-e354-4e13-9283-f1e15e0d5908-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.860275 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce10b87-e354-4e13-9283-f1e15e0d5908-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.860298 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9ce10b87-e354-4e13-9283-f1e15e0d5908-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.860329 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce10b87-e354-4e13-9283-f1e15e0d5908-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.860348 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xz9z\" (UniqueName: \"kubernetes.io/projected/9ce10b87-e354-4e13-9283-f1e15e0d5908-kube-api-access-4xz9z\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.961700 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce10b87-e354-4e13-9283-f1e15e0d5908-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.961747 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ce10b87-e354-4e13-9283-f1e15e0d5908-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.961779 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce10b87-e354-4e13-9283-f1e15e0d5908-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.961806 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce10b87-e354-4e13-9283-f1e15e0d5908-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.961832 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xz9z\" (UniqueName: \"kubernetes.io/projected/9ce10b87-e354-4e13-9283-f1e15e0d5908-kube-api-access-4xz9z\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.965621 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce10b87-e354-4e13-9283-f1e15e0d5908-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.966472 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9ce10b87-e354-4e13-9283-f1e15e0d5908-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.967295 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ce10b87-e354-4e13-9283-f1e15e0d5908-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.969034 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ce10b87-e354-4e13-9283-f1e15e0d5908-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:11 crc kubenswrapper[4806]: I0126 08:14:11.978558 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xz9z\" (UniqueName: \"kubernetes.io/projected/9ce10b87-e354-4e13-9283-f1e15e0d5908-kube-api-access-4xz9z\") pod \"nova-cell1-novncproxy-0\" (UID: \"9ce10b87-e354-4e13-9283-f1e15e0d5908\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:12 crc kubenswrapper[4806]: I0126 08:14:12.042931 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:12 crc kubenswrapper[4806]: I0126 08:14:12.506360 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 08:14:12 crc kubenswrapper[4806]: I0126 08:14:12.657495 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9ce10b87-e354-4e13-9283-f1e15e0d5908","Type":"ContainerStarted","Data":"cf1abe254c65128126a3a6d865d2abc6d1872eea240ae819aa9dbe3dcf7e621f"} Jan 26 08:14:13 crc kubenswrapper[4806]: I0126 08:14:13.055621 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac27597-a156-4995-a67e-98858e667c8a" path="/var/lib/kubelet/pods/4ac27597-a156-4995-a67e-98858e667c8a/volumes" Jan 26 08:14:13 crc kubenswrapper[4806]: I0126 08:14:13.666617 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"9ce10b87-e354-4e13-9283-f1e15e0d5908","Type":"ContainerStarted","Data":"9d4ace7b37c97dfb2b05b66d65361e18f78ce0ab74c804e14cb41e04d5595f5e"} Jan 26 08:14:13 crc kubenswrapper[4806]: I0126 08:14:13.689903 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.6898882349999997 podStartE2EDuration="2.689888235s" podCreationTimestamp="2026-01-26 08:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:14:13.684997608 +0000 UTC m=+1232.949405664" watchObservedRunningTime="2026-01-26 08:14:13.689888235 +0000 UTC m=+1232.954296291" Jan 26 08:14:14 crc kubenswrapper[4806]: I0126 08:14:14.658878 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 08:14:14 crc kubenswrapper[4806]: I0126 08:14:14.920042 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 08:14:14 crc kubenswrapper[4806]: I0126 08:14:14.920494 4806 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 08:14:14 crc kubenswrapper[4806]: I0126 08:14:14.926765 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 08:14:14 crc kubenswrapper[4806]: I0126 08:14:14.930199 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 08:14:15 crc kubenswrapper[4806]: I0126 08:14:15.685142 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 08:14:15 crc kubenswrapper[4806]: I0126 08:14:15.688980 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 08:14:15 crc kubenswrapper[4806]: I0126 08:14:15.806927 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:14:15 crc kubenswrapper[4806]: I0126 08:14:15.807252 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:14:15 crc kubenswrapper[4806]: I0126 08:14:15.957640 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-dptl9"] Jan 26 08:14:15 crc kubenswrapper[4806]: I0126 08:14:15.977009 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-dptl9"] Jan 26 08:14:15 crc kubenswrapper[4806]: I0126 08:14:15.977125 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.114457 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.114551 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.114596 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-config\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.114648 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.114713 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.114752 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xx7t\" (UniqueName: \"kubernetes.io/projected/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-kube-api-access-4xx7t\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.216826 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.216896 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.216930 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xx7t\" (UniqueName: \"kubernetes.io/projected/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-kube-api-access-4xx7t\") pod 
\"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.216996 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.217025 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.217052 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-config\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.217922 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-config\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.217978 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.218216 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.218681 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.218893 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.245510 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xx7t\" (UniqueName: \"kubernetes.io/projected/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-kube-api-access-4xx7t\") pod \"dnsmasq-dns-f84f9ccf-dptl9\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 
08:14:16.330856 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:16 crc kubenswrapper[4806]: I0126 08:14:16.907239 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-dptl9"] Jan 26 08:14:17 crc kubenswrapper[4806]: I0126 08:14:17.055455 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:17 crc kubenswrapper[4806]: E0126 08:14:17.446333 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd8a87c9_fdf5_48dd_9d72_5767bad62a99.slice/crio-c80b45dfe22113616d620ce7f237b8ba546243adf081b090820a8f78bb275e11.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd8a87c9_fdf5_48dd_9d72_5767bad62a99.slice/crio-conmon-c80b45dfe22113616d620ce7f237b8ba546243adf081b090820a8f78bb275e11.scope\": RecentStats: unable to find data in memory cache]" Jan 26 08:14:17 crc kubenswrapper[4806]: I0126 08:14:17.711892 4806 generic.go:334] "Generic (PLEG): container finished" podID="bd8a87c9-fdf5-48dd-9d72-5767bad62a99" containerID="c80b45dfe22113616d620ce7f237b8ba546243adf081b090820a8f78bb275e11" exitCode=0 Jan 26 08:14:17 crc kubenswrapper[4806]: I0126 08:14:17.713209 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" event={"ID":"bd8a87c9-fdf5-48dd-9d72-5767bad62a99","Type":"ContainerDied","Data":"c80b45dfe22113616d620ce7f237b8ba546243adf081b090820a8f78bb275e11"} Jan 26 08:14:17 crc kubenswrapper[4806]: I0126 08:14:17.713266 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" event={"ID":"bd8a87c9-fdf5-48dd-9d72-5767bad62a99","Type":"ContainerStarted","Data":"e6ff4b1a8ce3ebf2c2b29d1f4529f62b6d8efea14c07a8bf6aab0d789023742c"} Jan 26 08:14:18 crc kubenswrapper[4806]: I0126 08:14:18.720833 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" event={"ID":"bd8a87c9-fdf5-48dd-9d72-5767bad62a99","Type":"ContainerStarted","Data":"f3df4a4892569a8fb143b20d9fad2bd154a0d27579b6b3335113ffd8ac087f6c"} Jan 26 08:14:18 crc kubenswrapper[4806]: I0126 08:14:18.721159 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:18 crc kubenswrapper[4806]: I0126 08:14:18.745278 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" podStartSLOduration=3.7452567610000003 podStartE2EDuration="3.745256761s" podCreationTimestamp="2026-01-26 08:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:14:18.737322179 +0000 UTC m=+1238.001730235" watchObservedRunningTime="2026-01-26 08:14:18.745256761 +0000 UTC m=+1238.009664817" Jan 26 08:14:18 crc kubenswrapper[4806]: I0126 08:14:18.847021 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:14:18 crc kubenswrapper[4806]: I0126 08:14:18.847271 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2d59962d-c0f8-41f3-92eb-917eb72966f4" containerName="nova-api-log" containerID="cri-o://950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8" 
gracePeriod=30 Jan 26 08:14:18 crc kubenswrapper[4806]: I0126 08:14:18.847391 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2d59962d-c0f8-41f3-92eb-917eb72966f4" containerName="nova-api-api" containerID="cri-o://fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a" gracePeriod=30 Jan 26 08:14:19 crc kubenswrapper[4806]: I0126 08:14:19.731907 4806 generic.go:334] "Generic (PLEG): container finished" podID="2d59962d-c0f8-41f3-92eb-917eb72966f4" containerID="950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8" exitCode=143 Jan 26 08:14:19 crc kubenswrapper[4806]: I0126 08:14:19.731995 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d59962d-c0f8-41f3-92eb-917eb72966f4","Type":"ContainerDied","Data":"950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8"} Jan 26 08:14:20 crc kubenswrapper[4806]: I0126 08:14:20.239454 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:14:20 crc kubenswrapper[4806]: I0126 08:14:20.239709 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="ceilometer-central-agent" containerID="cri-o://41ffb145c1d5faf50af42e9d1848d6d9453ea93a16379a544674db0e8e772af5" gracePeriod=30 Jan 26 08:14:20 crc kubenswrapper[4806]: I0126 08:14:20.239775 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="ceilometer-notification-agent" containerID="cri-o://95eb1ec0195a28be3bde73feba41786fc919a340e066c478330c4127cc0bc337" gracePeriod=30 Jan 26 08:14:20 crc kubenswrapper[4806]: I0126 08:14:20.239789 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="sg-core" containerID="cri-o://d0521d5a61ebea8f5723d5494b314a2548fbac1d3d9c7647095705c26bc120c6" gracePeriod=30 Jan 26 08:14:20 crc kubenswrapper[4806]: I0126 08:14:20.239884 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="proxy-httpd" containerID="cri-o://ab57275b20b2b33e2d67c05d237dfc4419fe687aabb8d5f9fc1967d1c6cfa489" gracePeriod=30 Jan 26 08:14:20 crc kubenswrapper[4806]: I0126 08:14:20.744379 4806 generic.go:334] "Generic (PLEG): container finished" podID="1907c0be-76fa-416d-ad59-3e106d418c43" containerID="ab57275b20b2b33e2d67c05d237dfc4419fe687aabb8d5f9fc1967d1c6cfa489" exitCode=0 Jan 26 08:14:20 crc kubenswrapper[4806]: I0126 08:14:20.744702 4806 generic.go:334] "Generic (PLEG): container finished" podID="1907c0be-76fa-416d-ad59-3e106d418c43" containerID="d0521d5a61ebea8f5723d5494b314a2548fbac1d3d9c7647095705c26bc120c6" exitCode=2 Jan 26 08:14:20 crc kubenswrapper[4806]: I0126 08:14:20.744711 4806 generic.go:334] "Generic (PLEG): container finished" podID="1907c0be-76fa-416d-ad59-3e106d418c43" containerID="41ffb145c1d5faf50af42e9d1848d6d9453ea93a16379a544674db0e8e772af5" exitCode=0 Jan 26 08:14:20 crc kubenswrapper[4806]: I0126 08:14:20.744463 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1907c0be-76fa-416d-ad59-3e106d418c43","Type":"ContainerDied","Data":"ab57275b20b2b33e2d67c05d237dfc4419fe687aabb8d5f9fc1967d1c6cfa489"} Jan 26 08:14:20 crc 
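
The records above show the kubelet asking CRI-O to stop nova-api-0's two containers with a 30-second grace period, and the same pattern follows for ceilometer-0's four containers. The exit codes reported afterwards follow the usual 128+signal convention: 143 for nova-api-log means the process exited on SIGTERM within the grace period, 137 (seen earlier for the novncproxy container) means SIGKILL, and 0 or 2 are ordinary process exit statuses. A sketch that joins the "Killing container with a grace period" records with the later "container finished" exit codes (file name and output format illustrative):

import re
import sys

KILL = re.compile(
    r'"Killing container with a grace period" pod="(?P<pod>[^"]+)" '
    r'podUID="[^"]+" containerName="(?P<name>[^"]+)" '
    r'containerID="cri-o://(?P<cid>[0-9a-f]+)" gracePeriod=(?P<grace>\d+)')
FINISHED = re.compile(r'containerID="(?P<cid>[0-9a-f]+)" exitCode=(?P<code>-?\d+)')

def describe(code):
    # Exit codes above 128 conventionally encode termination by signal (code - 128):
    # 143 = SIGTERM honoured within the grace period, 137 = SIGKILL.
    if code > 128:
        return f"signal {code - 128}"
    return "normal exit" if code == 0 else f"exit status {code}"

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "kubelet.log"
    kills, codes = {}, {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if (m := KILL.search(line)):
                kills[m["cid"]] = (m["pod"], m["name"], int(m["grace"]))
            if (m := FINISHED.search(line)):
                codes[m["cid"]] = int(m["code"])
    for cid, (pod, name, grace) in kills.items():
        code = codes.get(cid)
        outcome = describe(code) if code is not None else "no exit recorded"
        print(f"{pod}/{name}: grace={grace}s -> {outcome}")
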
kubenswrapper[4806]: I0126 08:14:20.744749 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1907c0be-76fa-416d-ad59-3e106d418c43","Type":"ContainerDied","Data":"d0521d5a61ebea8f5723d5494b314a2548fbac1d3d9c7647095705c26bc120c6"} Jan 26 08:14:20 crc kubenswrapper[4806]: I0126 08:14:20.744763 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1907c0be-76fa-416d-ad59-3e106d418c43","Type":"ContainerDied","Data":"41ffb145c1d5faf50af42e9d1848d6d9453ea93a16379a544674db0e8e772af5"} Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.784130 4806 generic.go:334] "Generic (PLEG): container finished" podID="1907c0be-76fa-416d-ad59-3e106d418c43" containerID="95eb1ec0195a28be3bde73feba41786fc919a340e066c478330c4127cc0bc337" exitCode=0 Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.784720 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1907c0be-76fa-416d-ad59-3e106d418c43","Type":"ContainerDied","Data":"95eb1ec0195a28be3bde73feba41786fc919a340e066c478330c4127cc0bc337"} Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.871611 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.968171 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-config-data\") pod \"1907c0be-76fa-416d-ad59-3e106d418c43\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.968215 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7znh\" (UniqueName: \"kubernetes.io/projected/1907c0be-76fa-416d-ad59-3e106d418c43-kube-api-access-h7znh\") pod \"1907c0be-76fa-416d-ad59-3e106d418c43\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.968330 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-combined-ca-bundle\") pod \"1907c0be-76fa-416d-ad59-3e106d418c43\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.968362 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1907c0be-76fa-416d-ad59-3e106d418c43-log-httpd\") pod \"1907c0be-76fa-416d-ad59-3e106d418c43\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.968399 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1907c0be-76fa-416d-ad59-3e106d418c43-run-httpd\") pod \"1907c0be-76fa-416d-ad59-3e106d418c43\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.968475 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-scripts\") pod \"1907c0be-76fa-416d-ad59-3e106d418c43\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.968507 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-sg-core-conf-yaml\") pod \"1907c0be-76fa-416d-ad59-3e106d418c43\" (UID: \"1907c0be-76fa-416d-ad59-3e106d418c43\") " Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.970467 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1907c0be-76fa-416d-ad59-3e106d418c43-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1907c0be-76fa-416d-ad59-3e106d418c43" (UID: "1907c0be-76fa-416d-ad59-3e106d418c43"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.971261 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1907c0be-76fa-416d-ad59-3e106d418c43-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1907c0be-76fa-416d-ad59-3e106d418c43" (UID: "1907c0be-76fa-416d-ad59-3e106d418c43"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.974260 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1907c0be-76fa-416d-ad59-3e106d418c43-kube-api-access-h7znh" (OuterVolumeSpecName: "kube-api-access-h7znh") pod "1907c0be-76fa-416d-ad59-3e106d418c43" (UID: "1907c0be-76fa-416d-ad59-3e106d418c43"). InnerVolumeSpecName "kube-api-access-h7znh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:14:21 crc kubenswrapper[4806]: I0126 08:14:21.977999 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-scripts" (OuterVolumeSpecName: "scripts") pod "1907c0be-76fa-416d-ad59-3e106d418c43" (UID: "1907c0be-76fa-416d-ad59-3e106d418c43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.012649 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1907c0be-76fa-416d-ad59-3e106d418c43" (UID: "1907c0be-76fa-416d-ad59-3e106d418c43"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.047955 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.070239 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1907c0be-76fa-416d-ad59-3e106d418c43-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.070269 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1907c0be-76fa-416d-ad59-3e106d418c43-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.070277 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.070286 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.070295 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7znh\" (UniqueName: \"kubernetes.io/projected/1907c0be-76fa-416d-ad59-3e106d418c43-kube-api-access-h7znh\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.153953 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.168575 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1907c0be-76fa-416d-ad59-3e106d418c43" (UID: "1907c0be-76fa-416d-ad59-3e106d418c43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.173204 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.216957 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-config-data" (OuterVolumeSpecName: "config-data") pod "1907c0be-76fa-416d-ad59-3e106d418c43" (UID: "1907c0be-76fa-416d-ad59-3e106d418c43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.276269 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1907c0be-76fa-416d-ad59-3e106d418c43-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.709494 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.784308 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d59962d-c0f8-41f3-92eb-917eb72966f4-combined-ca-bundle\") pod \"2d59962d-c0f8-41f3-92eb-917eb72966f4\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.784382 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d59962d-c0f8-41f3-92eb-917eb72966f4-config-data\") pod \"2d59962d-c0f8-41f3-92eb-917eb72966f4\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.785041 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d59962d-c0f8-41f3-92eb-917eb72966f4-logs\") pod \"2d59962d-c0f8-41f3-92eb-917eb72966f4\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.785167 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmnb2\" (UniqueName: \"kubernetes.io/projected/2d59962d-c0f8-41f3-92eb-917eb72966f4-kube-api-access-mmnb2\") pod \"2d59962d-c0f8-41f3-92eb-917eb72966f4\" (UID: \"2d59962d-c0f8-41f3-92eb-917eb72966f4\") " Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.785424 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d59962d-c0f8-41f3-92eb-917eb72966f4-logs" (OuterVolumeSpecName: "logs") pod "2d59962d-c0f8-41f3-92eb-917eb72966f4" (UID: "2d59962d-c0f8-41f3-92eb-917eb72966f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.785949 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d59962d-c0f8-41f3-92eb-917eb72966f4-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.801181 4806 generic.go:334] "Generic (PLEG): container finished" podID="2d59962d-c0f8-41f3-92eb-917eb72966f4" containerID="fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a" exitCode=0 Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.801241 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d59962d-c0f8-41f3-92eb-917eb72966f4","Type":"ContainerDied","Data":"fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a"} Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.801267 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2d59962d-c0f8-41f3-92eb-917eb72966f4","Type":"ContainerDied","Data":"69ab8fb026bbffe77de5789dd440903020be2eea36a6c2f99fa83ce280893f44"} Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.801284 4806 scope.go:117] "RemoveContainer" containerID="fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.801393 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.809306 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1907c0be-76fa-416d-ad59-3e106d418c43","Type":"ContainerDied","Data":"3d756fa9df6d882fa1fd254c3d904a17282c42a85006c1ca3eb8ff9b946ee30c"} Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.809373 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.820798 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d59962d-c0f8-41f3-92eb-917eb72966f4-kube-api-access-mmnb2" (OuterVolumeSpecName: "kube-api-access-mmnb2") pod "2d59962d-c0f8-41f3-92eb-917eb72966f4" (UID: "2d59962d-c0f8-41f3-92eb-917eb72966f4"). InnerVolumeSpecName "kube-api-access-mmnb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.835428 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d59962d-c0f8-41f3-92eb-917eb72966f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d59962d-c0f8-41f3-92eb-917eb72966f4" (UID: "2d59962d-c0f8-41f3-92eb-917eb72966f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.839423 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.843888 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d59962d-c0f8-41f3-92eb-917eb72966f4-config-data" (OuterVolumeSpecName: "config-data") pod "2d59962d-c0f8-41f3-92eb-917eb72966f4" (UID: "2d59962d-c0f8-41f3-92eb-917eb72966f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.894668 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmnb2\" (UniqueName: \"kubernetes.io/projected/2d59962d-c0f8-41f3-92eb-917eb72966f4-kube-api-access-mmnb2\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.894696 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d59962d-c0f8-41f3-92eb-917eb72966f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.894706 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d59962d-c0f8-41f3-92eb-917eb72966f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.920359 4806 scope.go:117] "RemoveContainer" containerID="950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8" Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.943050 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:14:22 crc kubenswrapper[4806]: I0126 08:14:22.957858 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.993363 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:14:23 crc kubenswrapper[4806]: E0126 08:14:22.993821 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d59962d-c0f8-41f3-92eb-917eb72966f4" containerName="nova-api-log" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.993834 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d59962d-c0f8-41f3-92eb-917eb72966f4" containerName="nova-api-log" Jan 26 08:14:23 crc kubenswrapper[4806]: E0126 08:14:22.993846 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d59962d-c0f8-41f3-92eb-917eb72966f4" containerName="nova-api-api" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.993852 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d59962d-c0f8-41f3-92eb-917eb72966f4" containerName="nova-api-api" Jan 26 08:14:23 crc kubenswrapper[4806]: E0126 08:14:22.993863 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="sg-core" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.993869 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="sg-core" Jan 26 08:14:23 crc kubenswrapper[4806]: E0126 08:14:22.993881 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="proxy-httpd" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.993886 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="proxy-httpd" Jan 26 08:14:23 crc kubenswrapper[4806]: E0126 08:14:22.993895 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="ceilometer-central-agent" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.993901 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="ceilometer-central-agent" Jan 26 08:14:23 crc kubenswrapper[4806]: E0126 08:14:22.993910 4806 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="ceilometer-notification-agent" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.993915 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="ceilometer-notification-agent" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.993990 4806 scope.go:117] "RemoveContainer" containerID="fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.994110 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="proxy-httpd" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.994122 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="ceilometer-central-agent" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.994132 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d59962d-c0f8-41f3-92eb-917eb72966f4" containerName="nova-api-api" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.994145 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="sg-core" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.994154 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" containerName="ceilometer-notification-agent" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.994173 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d59962d-c0f8-41f3-92eb-917eb72966f4" containerName="nova-api-log" Jan 26 08:14:23 crc kubenswrapper[4806]: E0126 08:14:22.995499 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a\": container with ID starting with fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a not found: ID does not exist" containerID="fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.995540 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a"} err="failed to get container status \"fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a\": rpc error: code = NotFound desc = could not find container \"fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a\": container with ID starting with fcd3afbbc45f8e721c2e2ca587a4428c8995657ae6ee98a85d12a59628a63f3a not found: ID does not exist" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.995562 4806 scope.go:117] "RemoveContainer" containerID="950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8" Jan 26 08:14:23 crc kubenswrapper[4806]: E0126 08:14:22.996164 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8\": container with ID starting with 950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8 not found: ID does not exist" containerID="950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.996182 4806 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8"} err="failed to get container status \"950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8\": rpc error: code = NotFound desc = could not find container \"950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8\": container with ID starting with 950a99e2119c38ab19bf2e910c26f15c086b8049fa61b8f01dbcfe8fd92f83a8 not found: ID does not exist" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.996196 4806 scope.go:117] "RemoveContainer" containerID="ab57275b20b2b33e2d67c05d237dfc4419fe687aabb8d5f9fc1967d1c6cfa489" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:22.999402 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.005599 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.005800 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.018641 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.048409 4806 scope.go:117] "RemoveContainer" containerID="d0521d5a61ebea8f5723d5494b314a2548fbac1d3d9c7647095705c26bc120c6" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.063045 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1907c0be-76fa-416d-ad59-3e106d418c43" path="/var/lib/kubelet/pods/1907c0be-76fa-416d-ad59-3e106d418c43/volumes" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.096712 4806 scope.go:117] "RemoveContainer" containerID="95eb1ec0195a28be3bde73feba41786fc919a340e066c478330c4127cc0bc337" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.097671 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-config-data\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.097719 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clmts\" (UniqueName: \"kubernetes.io/projected/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-kube-api-access-clmts\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.097744 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-scripts\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.097763 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-log-httpd\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.097831 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-run-httpd\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.097849 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.097899 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.125138 4806 scope.go:117] "RemoveContainer" containerID="41ffb145c1d5faf50af42e9d1848d6d9453ea93a16379a544674db0e8e772af5" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.134064 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.156239 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.179363 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.180906 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.187970 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.188115 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.191208 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.199422 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-config-data\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.199454 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clmts\" (UniqueName: \"kubernetes.io/projected/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-kube-api-access-clmts\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.199504 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-scripts\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.199675 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-log-httpd\") 
pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.199783 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-run-httpd\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.199821 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.199874 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.200669 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-log-httpd\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.201883 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-run-httpd\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.213318 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.232743 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-config-data\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.233953 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.235822 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.259235 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-scripts\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.272333 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clmts\" (UniqueName: 
\"kubernetes.io/projected/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-kube-api-access-clmts\") pod \"ceilometer-0\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.297876 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-j5f4m"] Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.301108 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.303016 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87668aa1-185b-4149-98a5-c6ad71face4d-logs\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.303071 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-config-data\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.303180 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.303235 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.303360 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-public-tls-certs\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.303841 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbk7b\" (UniqueName: \"kubernetes.io/projected/87668aa1-185b-4149-98a5-c6ad71face4d-kube-api-access-sbk7b\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.310435 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.311095 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.328727 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-j5f4m"] Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.343201 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.349341 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.409418 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-public-tls-certs\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.410769 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbk7b\" (UniqueName: \"kubernetes.io/projected/87668aa1-185b-4149-98a5-c6ad71face4d-kube-api-access-sbk7b\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.410902 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94d9h\" (UniqueName: \"kubernetes.io/projected/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-kube-api-access-94d9h\") pod \"nova-cell1-cell-mapping-j5f4m\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.411095 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87668aa1-185b-4149-98a5-c6ad71face4d-logs\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.411212 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-config-data\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.411691 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.411873 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.413585 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-config-data\") pod \"nova-cell1-cell-mapping-j5f4m\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.411753 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87668aa1-185b-4149-98a5-c6ad71face4d-logs\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.413790 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j5f4m\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.415061 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-scripts\") pod \"nova-cell1-cell-mapping-j5f4m\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.417187 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-public-tls-certs\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.417685 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.422786 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.426491 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-config-data\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.434389 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbk7b\" (UniqueName: \"kubernetes.io/projected/87668aa1-185b-4149-98a5-c6ad71face4d-kube-api-access-sbk7b\") pod \"nova-api-0\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.517332 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-config-data\") pod \"nova-cell1-cell-mapping-j5f4m\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.517376 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j5f4m\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.517400 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-scripts\") pod \"nova-cell1-cell-mapping-j5f4m\" (UID: 
\"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.517459 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94d9h\" (UniqueName: \"kubernetes.io/projected/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-kube-api-access-94d9h\") pod \"nova-cell1-cell-mapping-j5f4m\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.521082 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-scripts\") pod \"nova-cell1-cell-mapping-j5f4m\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.523025 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-j5f4m\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.527929 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.530066 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-config-data\") pod \"nova-cell1-cell-mapping-j5f4m\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.542015 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94d9h\" (UniqueName: \"kubernetes.io/projected/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-kube-api-access-94d9h\") pod \"nova-cell1-cell-mapping-j5f4m\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.625384 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:23 crc kubenswrapper[4806]: I0126 08:14:23.889931 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:14:23 crc kubenswrapper[4806]: W0126 08:14:23.891982 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6ff9c60_ebe3_40fb_aa92_180ed68ed333.slice/crio-b3fde46dca8d78fefcd1a02d7c6539bf29b9a8a7a3cfaae7c2f39149d724d7b3 WatchSource:0}: Error finding container b3fde46dca8d78fefcd1a02d7c6539bf29b9a8a7a3cfaae7c2f39149d724d7b3: Status 404 returned error can't find the container with id b3fde46dca8d78fefcd1a02d7c6539bf29b9a8a7a3cfaae7c2f39149d724d7b3 Jan 26 08:14:24 crc kubenswrapper[4806]: I0126 08:14:24.060212 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-j5f4m"] Jan 26 08:14:24 crc kubenswrapper[4806]: W0126 08:14:24.062066 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ad06e81_5ace_4cc0_9c53_aee0ec57425b.slice/crio-d3de32977a0ec5ecfed908259e2434ef6b4bfdb73599a72099d363b4de7474b7 WatchSource:0}: Error finding container d3de32977a0ec5ecfed908259e2434ef6b4bfdb73599a72099d363b4de7474b7: Status 404 returned error can't find the container with id d3de32977a0ec5ecfed908259e2434ef6b4bfdb73599a72099d363b4de7474b7 Jan 26 08:14:24 crc kubenswrapper[4806]: W0126 08:14:24.084431 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87668aa1_185b_4149_98a5_c6ad71face4d.slice/crio-1cdc6c031e13e494d612e1b0d1efdcab663a5600c28573c133920a652fb64e02 WatchSource:0}: Error finding container 1cdc6c031e13e494d612e1b0d1efdcab663a5600c28573c133920a652fb64e02: Status 404 returned error can't find the container with id 1cdc6c031e13e494d612e1b0d1efdcab663a5600c28573c133920a652fb64e02 Jan 26 08:14:24 crc kubenswrapper[4806]: I0126 08:14:24.101463 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:14:24 crc kubenswrapper[4806]: I0126 08:14:24.835870 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j5f4m" event={"ID":"6ad06e81-5ace-4cc0-9c53-aee0ec57425b","Type":"ContainerStarted","Data":"bdc730cb52f083e34a1b3265bcf8dcfa6ebd679236ed4fe2dae841cec138882b"} Jan 26 08:14:24 crc kubenswrapper[4806]: I0126 08:14:24.836157 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j5f4m" event={"ID":"6ad06e81-5ace-4cc0-9c53-aee0ec57425b","Type":"ContainerStarted","Data":"d3de32977a0ec5ecfed908259e2434ef6b4bfdb73599a72099d363b4de7474b7"} Jan 26 08:14:24 crc kubenswrapper[4806]: I0126 08:14:24.845017 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ff9c60-ebe3-40fb-aa92-180ed68ed333","Type":"ContainerStarted","Data":"e7acfa7e439b45d135015e78d87ed9ded35de16c019463d9795f91737c4ecbec"} Jan 26 08:14:24 crc kubenswrapper[4806]: I0126 08:14:24.845113 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ff9c60-ebe3-40fb-aa92-180ed68ed333","Type":"ContainerStarted","Data":"b3fde46dca8d78fefcd1a02d7c6539bf29b9a8a7a3cfaae7c2f39149d724d7b3"} Jan 26 08:14:24 crc kubenswrapper[4806]: I0126 08:14:24.847040 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"87668aa1-185b-4149-98a5-c6ad71face4d","Type":"ContainerStarted","Data":"9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9"} Jan 26 08:14:24 crc kubenswrapper[4806]: I0126 08:14:24.847069 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87668aa1-185b-4149-98a5-c6ad71face4d","Type":"ContainerStarted","Data":"d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1"} Jan 26 08:14:24 crc kubenswrapper[4806]: I0126 08:14:24.847082 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87668aa1-185b-4149-98a5-c6ad71face4d","Type":"ContainerStarted","Data":"1cdc6c031e13e494d612e1b0d1efdcab663a5600c28573c133920a652fb64e02"} Jan 26 08:14:24 crc kubenswrapper[4806]: I0126 08:14:24.883671 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-j5f4m" podStartSLOduration=1.883651856 podStartE2EDuration="1.883651856s" podCreationTimestamp="2026-01-26 08:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:14:24.863146453 +0000 UTC m=+1244.127554509" watchObservedRunningTime="2026-01-26 08:14:24.883651856 +0000 UTC m=+1244.148059912" Jan 26 08:14:24 crc kubenswrapper[4806]: I0126 08:14:24.888123 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.888115951 podStartE2EDuration="1.888115951s" podCreationTimestamp="2026-01-26 08:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:14:24.882040821 +0000 UTC m=+1244.146448877" watchObservedRunningTime="2026-01-26 08:14:24.888115951 +0000 UTC m=+1244.152524007" Jan 26 08:14:25 crc kubenswrapper[4806]: I0126 08:14:25.052146 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d59962d-c0f8-41f3-92eb-917eb72966f4" path="/var/lib/kubelet/pods/2d59962d-c0f8-41f3-92eb-917eb72966f4/volumes" Jan 26 08:14:25 crc kubenswrapper[4806]: I0126 08:14:25.858334 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ff9c60-ebe3-40fb-aa92-180ed68ed333","Type":"ContainerStarted","Data":"3a048d494fffde007ca242d270163c5979ab4ab739a2b83d7f24cfe1956aacc4"} Jan 26 08:14:26 crc kubenswrapper[4806]: I0126 08:14:26.333731 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:14:26 crc kubenswrapper[4806]: I0126 08:14:26.426758 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qwtvl"] Jan 26 08:14:26 crc kubenswrapper[4806]: I0126 08:14:26.427044 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" podUID="001ecd97-04e4-4d0e-a713-34e7fc0a80a7" containerName="dnsmasq-dns" containerID="cri-o://862cc2a44cd0f69a7e2b0d6a19694935b37b8b55e7afb510002ee9ec72efc192" gracePeriod=10 Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:26.894136 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ff9c60-ebe3-40fb-aa92-180ed68ed333","Type":"ContainerStarted","Data":"3b21f1e0b216b88529c369201b457f4c47c8e10b6ea1f4d56f68c1823496ed2c"} Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:26.897473 4806 generic.go:334] "Generic (PLEG): container finished" 
podID="001ecd97-04e4-4d0e-a713-34e7fc0a80a7" containerID="862cc2a44cd0f69a7e2b0d6a19694935b37b8b55e7afb510002ee9ec72efc192" exitCode=0 Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:26.897505 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" event={"ID":"001ecd97-04e4-4d0e-a713-34e7fc0a80a7","Type":"ContainerDied","Data":"862cc2a44cd0f69a7e2b0d6a19694935b37b8b55e7afb510002ee9ec72efc192"} Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.041762 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.098162 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-dns-swift-storage-0\") pod \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.098313 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-dns-svc\") pod \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.098402 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-ovsdbserver-nb\") pod \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.098429 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-ovsdbserver-sb\") pod \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.098443 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-config\") pod \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.098461 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgs66\" (UniqueName: \"kubernetes.io/projected/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-kube-api-access-pgs66\") pod \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\" (UID: \"001ecd97-04e4-4d0e-a713-34e7fc0a80a7\") " Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.129082 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-kube-api-access-pgs66" (OuterVolumeSpecName: "kube-api-access-pgs66") pod "001ecd97-04e4-4d0e-a713-34e7fc0a80a7" (UID: "001ecd97-04e4-4d0e-a713-34e7fc0a80a7"). InnerVolumeSpecName "kube-api-access-pgs66". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.201440 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgs66\" (UniqueName: \"kubernetes.io/projected/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-kube-api-access-pgs66\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.206198 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "001ecd97-04e4-4d0e-a713-34e7fc0a80a7" (UID: "001ecd97-04e4-4d0e-a713-34e7fc0a80a7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.277083 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "001ecd97-04e4-4d0e-a713-34e7fc0a80a7" (UID: "001ecd97-04e4-4d0e-a713-34e7fc0a80a7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.289639 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-config" (OuterVolumeSpecName: "config") pod "001ecd97-04e4-4d0e-a713-34e7fc0a80a7" (UID: "001ecd97-04e4-4d0e-a713-34e7fc0a80a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.292622 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "001ecd97-04e4-4d0e-a713-34e7fc0a80a7" (UID: "001ecd97-04e4-4d0e-a713-34e7fc0a80a7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.299889 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "001ecd97-04e4-4d0e-a713-34e7fc0a80a7" (UID: "001ecd97-04e4-4d0e-a713-34e7fc0a80a7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.303572 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.303592 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.303601 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.303610 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.303618 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/001ecd97-04e4-4d0e-a713-34e7fc0a80a7-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.910063 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ff9c60-ebe3-40fb-aa92-180ed68ed333","Type":"ContainerStarted","Data":"43559937f7d83757f4461f20aa30e31687296f3f4548ef23b5d6553f025a607e"} Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.910375 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="ceilometer-central-agent" containerID="cri-o://e7acfa7e439b45d135015e78d87ed9ded35de16c019463d9795f91737c4ecbec" gracePeriod=30 Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.910693 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.910825 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="proxy-httpd" containerID="cri-o://43559937f7d83757f4461f20aa30e31687296f3f4548ef23b5d6553f025a607e" gracePeriod=30 Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.911066 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="sg-core" containerID="cri-o://3b21f1e0b216b88529c369201b457f4c47c8e10b6ea1f4d56f68c1823496ed2c" gracePeriod=30 Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.911136 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="ceilometer-notification-agent" containerID="cri-o://3a048d494fffde007ca242d270163c5979ab4ab739a2b83d7f24cfe1956aacc4" gracePeriod=30 Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.918417 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl" event={"ID":"001ecd97-04e4-4d0e-a713-34e7fc0a80a7","Type":"ContainerDied","Data":"d6099c6f109b684dea38008828d7e9e04e616af8cc8a6f4030bb6e4b5e0006b4"} Jan 26 
Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.918482 4806 scope.go:117] "RemoveContainer" containerID="862cc2a44cd0f69a7e2b0d6a19694935b37b8b55e7afb510002ee9ec72efc192"
Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.918642 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-qwtvl"
Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.953917 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.817324062 podStartE2EDuration="5.953896698s" podCreationTimestamp="2026-01-26 08:14:22 +0000 UTC" firstStartedPulling="2026-01-26 08:14:23.898223418 +0000 UTC m=+1243.162631474" lastFinishedPulling="2026-01-26 08:14:27.034796064 +0000 UTC m=+1246.299204110" observedRunningTime="2026-01-26 08:14:27.949189346 +0000 UTC m=+1247.213597402" watchObservedRunningTime="2026-01-26 08:14:27.953896698 +0000 UTC m=+1247.218304754"
Jan 26 08:14:27 crc kubenswrapper[4806]: I0126 08:14:27.995792 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qwtvl"]
Jan 26 08:14:28 crc kubenswrapper[4806]: I0126 08:14:28.000942 4806 scope.go:117] "RemoveContainer" containerID="466eb1c9b8fe43db8d40e8ff281f9233eb106cae8827c8d209937e5a72210c24"
Jan 26 08:14:28 crc kubenswrapper[4806]: I0126 08:14:28.003974 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-qwtvl"]
Jan 26 08:14:28 crc kubenswrapper[4806]: I0126 08:14:28.934032 4806 generic.go:334] "Generic (PLEG): container finished" podID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerID="43559937f7d83757f4461f20aa30e31687296f3f4548ef23b5d6553f025a607e" exitCode=0
Jan 26 08:14:28 crc kubenswrapper[4806]: I0126 08:14:28.934072 4806 generic.go:334] "Generic (PLEG): container finished" podID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerID="3b21f1e0b216b88529c369201b457f4c47c8e10b6ea1f4d56f68c1823496ed2c" exitCode=2
Jan 26 08:14:28 crc kubenswrapper[4806]: I0126 08:14:28.934158 4806 generic.go:334] "Generic (PLEG): container finished" podID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerID="3a048d494fffde007ca242d270163c5979ab4ab739a2b83d7f24cfe1956aacc4" exitCode=0
Jan 26 08:14:28 crc kubenswrapper[4806]: I0126 08:14:28.934106 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ff9c60-ebe3-40fb-aa92-180ed68ed333","Type":"ContainerDied","Data":"43559937f7d83757f4461f20aa30e31687296f3f4548ef23b5d6553f025a607e"}
Jan 26 08:14:28 crc kubenswrapper[4806]: I0126 08:14:28.934196 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ff9c60-ebe3-40fb-aa92-180ed68ed333","Type":"ContainerDied","Data":"3b21f1e0b216b88529c369201b457f4c47c8e10b6ea1f4d56f68c1823496ed2c"}
Jan 26 08:14:28 crc kubenswrapper[4806]: I0126 08:14:28.934210 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ff9c60-ebe3-40fb-aa92-180ed68ed333","Type":"ContainerDied","Data":"3a048d494fffde007ca242d270163c5979ab4ab739a2b83d7f24cfe1956aacc4"}
Jan 26 08:14:29 crc kubenswrapper[4806]: I0126 08:14:29.051675 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="001ecd97-04e4-4d0e-a713-34e7fc0a80a7" path="/var/lib/kubelet/pods/001ecd97-04e4-4d0e-a713-34e7fc0a80a7/volumes"
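In the "Observed pod startup duration" entry above, podStartE2EDuration (5.953896698s) exceeds podStartSLOduration (2.817324062) by exactly the image-pull window, lastFinishedPulling minus firstStartedPulling (about 3.137 s), consistent with the SLO metric excluding image pull time. A minimal Python sketch that extracts these fields from a saved copy of this journal; the kubelet.log name is an assumption.

```python
import re
from datetime import datetime

# Minimal sketch for the "Observed pod startup duration" entries in this log
# ("kubelet.log" is an assumed file name): report the end-to-end startup time,
# the SLO-relevant portion, and the image-pull window that separates the two.
STARTUP = re.compile(
    r'"Observed pod startup duration" pod="(?P<pod>[^"]+)" '
    r'podStartSLOduration=(?P<slo>[-\d.e]+) '
    r'podStartE2EDuration="(?P<e2e>[^"]+)" '
    r'podCreationTimestamp="[^"]+" '
    r'firstStartedPulling="(?P<first>[^"]+)" '
    r'lastFinishedPulling="(?P<last>[^"]+)"'
)

def parse_ts(value):
    # e.g. "2026-01-26 08:14:23.898223418 +0000 UTC m=+1243.162631474"
    date, clock, *_ = value.split()
    return datetime.fromisoformat(f"{date} {clock[:15]}")  # truncate ns to µs

def startup_report(path="kubelet.log"):
    for m in STARTUP.finditer(open(path).read()):
        pull = (parse_ts(m["last"]) - parse_ts(m["first"])).total_seconds()
        print(f'{m["pod"]}: e2e={m["e2e"]} slo={m["slo"]}s pull~{pull:.3f}s')

if __name__ == "__main__":
    startup_report()
```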
containerID="bdc730cb52f083e34a1b3265bcf8dcfa6ebd679236ed4fe2dae841cec138882b" exitCode=0 Jan 26 08:14:29 crc kubenswrapper[4806]: I0126 08:14:29.945733 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j5f4m" event={"ID":"6ad06e81-5ace-4cc0-9c53-aee0ec57425b","Type":"ContainerDied","Data":"bdc730cb52f083e34a1b3265bcf8dcfa6ebd679236ed4fe2dae841cec138882b"} Jan 26 08:14:30 crc kubenswrapper[4806]: I0126 08:14:30.961333 4806 generic.go:334] "Generic (PLEG): container finished" podID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerID="e7acfa7e439b45d135015e78d87ed9ded35de16c019463d9795f91737c4ecbec" exitCode=0 Jan 26 08:14:30 crc kubenswrapper[4806]: I0126 08:14:30.961483 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ff9c60-ebe3-40fb-aa92-180ed68ed333","Type":"ContainerDied","Data":"e7acfa7e439b45d135015e78d87ed9ded35de16c019463d9795f91737c4ecbec"} Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.425561 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.433307 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.493913 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-combined-ca-bundle\") pod \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.494293 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-run-httpd\") pod \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.494339 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-sg-core-conf-yaml\") pod \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.494381 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clmts\" (UniqueName: \"kubernetes.io/projected/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-kube-api-access-clmts\") pod \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.494415 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-combined-ca-bundle\") pod \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.494452 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-config-data\") pod \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.494508 4806 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-scripts\") pod \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.494592 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f6ff9c60-ebe3-40fb-aa92-180ed68ed333" (UID: "f6ff9c60-ebe3-40fb-aa92-180ed68ed333"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.494604 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-scripts\") pod \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.494660 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-config-data\") pod \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.494710 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94d9h\" (UniqueName: \"kubernetes.io/projected/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-kube-api-access-94d9h\") pod \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\" (UID: \"6ad06e81-5ace-4cc0-9c53-aee0ec57425b\") " Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.494753 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-log-httpd\") pod \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\" (UID: \"f6ff9c60-ebe3-40fb-aa92-180ed68ed333\") " Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.495382 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.495784 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f6ff9c60-ebe3-40fb-aa92-180ed68ed333" (UID: "f6ff9c60-ebe3-40fb-aa92-180ed68ed333"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.499634 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-scripts" (OuterVolumeSpecName: "scripts") pod "6ad06e81-5ace-4cc0-9c53-aee0ec57425b" (UID: "6ad06e81-5ace-4cc0-9c53-aee0ec57425b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.501116 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-kube-api-access-94d9h" (OuterVolumeSpecName: "kube-api-access-94d9h") pod "6ad06e81-5ace-4cc0-9c53-aee0ec57425b" (UID: "6ad06e81-5ace-4cc0-9c53-aee0ec57425b"). 
InnerVolumeSpecName "kube-api-access-94d9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.503723 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-kube-api-access-clmts" (OuterVolumeSpecName: "kube-api-access-clmts") pod "f6ff9c60-ebe3-40fb-aa92-180ed68ed333" (UID: "f6ff9c60-ebe3-40fb-aa92-180ed68ed333"). InnerVolumeSpecName "kube-api-access-clmts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.508510 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-scripts" (OuterVolumeSpecName: "scripts") pod "f6ff9c60-ebe3-40fb-aa92-180ed68ed333" (UID: "f6ff9c60-ebe3-40fb-aa92-180ed68ed333"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.525819 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ad06e81-5ace-4cc0-9c53-aee0ec57425b" (UID: "6ad06e81-5ace-4cc0-9c53-aee0ec57425b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.526304 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-config-data" (OuterVolumeSpecName: "config-data") pod "6ad06e81-5ace-4cc0-9c53-aee0ec57425b" (UID: "6ad06e81-5ace-4cc0-9c53-aee0ec57425b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.527033 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f6ff9c60-ebe3-40fb-aa92-180ed68ed333" (UID: "f6ff9c60-ebe3-40fb-aa92-180ed68ed333"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.572613 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6ff9c60-ebe3-40fb-aa92-180ed68ed333" (UID: "f6ff9c60-ebe3-40fb-aa92-180ed68ed333"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.593060 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-config-data" (OuterVolumeSpecName: "config-data") pod "f6ff9c60-ebe3-40fb-aa92-180ed68ed333" (UID: "f6ff9c60-ebe3-40fb-aa92-180ed68ed333"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.597202 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.597228 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.597239 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.597248 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94d9h\" (UniqueName: \"kubernetes.io/projected/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-kube-api-access-94d9h\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.597260 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.597268 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.597276 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.597284 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clmts\" (UniqueName: \"kubernetes.io/projected/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-kube-api-access-clmts\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.597292 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad06e81-5ace-4cc0-9c53-aee0ec57425b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.597299 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6ff9c60-ebe3-40fb-aa92-180ed68ed333-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.973052 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f6ff9c60-ebe3-40fb-aa92-180ed68ed333","Type":"ContainerDied","Data":"b3fde46dca8d78fefcd1a02d7c6539bf29b9a8a7a3cfaae7c2f39149d724d7b3"} Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.973099 4806 scope.go:117] "RemoveContainer" containerID="43559937f7d83757f4461f20aa30e31687296f3f4548ef23b5d6553f025a607e" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.973213 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.982423 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-j5f4m" event={"ID":"6ad06e81-5ace-4cc0-9c53-aee0ec57425b","Type":"ContainerDied","Data":"d3de32977a0ec5ecfed908259e2434ef6b4bfdb73599a72099d363b4de7474b7"} Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.982474 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3de32977a0ec5ecfed908259e2434ef6b4bfdb73599a72099d363b4de7474b7" Jan 26 08:14:31 crc kubenswrapper[4806]: I0126 08:14:31.982632 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-j5f4m" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.064717 4806 scope.go:117] "RemoveContainer" containerID="3b21f1e0b216b88529c369201b457f4c47c8e10b6ea1f4d56f68c1823496ed2c" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.093508 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.112448 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.135813 4806 scope.go:117] "RemoveContainer" containerID="3a048d494fffde007ca242d270163c5979ab4ab739a2b83d7f24cfe1956aacc4" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.144286 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:14:32 crc kubenswrapper[4806]: E0126 08:14:32.144820 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="001ecd97-04e4-4d0e-a713-34e7fc0a80a7" containerName="init" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.144842 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="001ecd97-04e4-4d0e-a713-34e7fc0a80a7" containerName="init" Jan 26 08:14:32 crc kubenswrapper[4806]: E0126 08:14:32.144856 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="sg-core" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.144867 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="sg-core" Jan 26 08:14:32 crc kubenswrapper[4806]: E0126 08:14:32.144887 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="001ecd97-04e4-4d0e-a713-34e7fc0a80a7" containerName="dnsmasq-dns" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.144896 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="001ecd97-04e4-4d0e-a713-34e7fc0a80a7" containerName="dnsmasq-dns" Jan 26 08:14:32 crc kubenswrapper[4806]: E0126 08:14:32.144913 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ad06e81-5ace-4cc0-9c53-aee0ec57425b" containerName="nova-manage" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.144921 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ad06e81-5ace-4cc0-9c53-aee0ec57425b" containerName="nova-manage" Jan 26 08:14:32 crc kubenswrapper[4806]: E0126 08:14:32.144937 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="proxy-httpd" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.144946 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="proxy-httpd" Jan 26 08:14:32 crc kubenswrapper[4806]: E0126 
08:14:32.144962 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="ceilometer-central-agent" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.144971 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="ceilometer-central-agent" Jan 26 08:14:32 crc kubenswrapper[4806]: E0126 08:14:32.144989 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="ceilometer-notification-agent" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.144998 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="ceilometer-notification-agent" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.145222 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="sg-core" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.145238 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ad06e81-5ace-4cc0-9c53-aee0ec57425b" containerName="nova-manage" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.145256 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="ceilometer-notification-agent" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.145268 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="ceilometer-central-agent" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.145287 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="001ecd97-04e4-4d0e-a713-34e7fc0a80a7" containerName="dnsmasq-dns" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.145308 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" containerName="proxy-httpd" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.147602 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.149545 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.150260 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.158620 4806 scope.go:117] "RemoveContainer" containerID="e7acfa7e439b45d135015e78d87ed9ded35de16c019463d9795f91737c4ecbec" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.169693 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.223216 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.223512 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="adb88266-6c78-4f92-89a8-1f8eb73b60c3" containerName="nova-scheduler-scheduler" containerID="cri-o://15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b" gracePeriod=30 Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.257058 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.257394 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="87668aa1-185b-4149-98a5-c6ad71face4d" containerName="nova-api-log" containerID="cri-o://d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1" gracePeriod=30 Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.257958 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="87668aa1-185b-4149-98a5-c6ad71face4d" containerName="nova-api-api" containerID="cri-o://9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9" gracePeriod=30 Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.299319 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.299802 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" containerName="nova-metadata-log" containerID="cri-o://4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f" gracePeriod=30 Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.299979 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" containerName="nova-metadata-metadata" containerID="cri-o://80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30" gracePeriod=30 Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.317550 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b72a45f-26b1-466b-b078-81efe4bb135f-log-httpd\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.317617 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-config-data\") pod 
\"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.317658 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.317710 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b72a45f-26b1-466b-b078-81efe4bb135f-run-httpd\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.317749 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.317776 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp6vj\" (UniqueName: \"kubernetes.io/projected/8b72a45f-26b1-466b-b078-81efe4bb135f-kube-api-access-xp6vj\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.317828 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-scripts\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.419732 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b72a45f-26b1-466b-b078-81efe4bb135f-run-httpd\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.419913 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.419990 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xp6vj\" (UniqueName: \"kubernetes.io/projected/8b72a45f-26b1-466b-b078-81efe4bb135f-kube-api-access-xp6vj\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.420079 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-scripts\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.420191 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/8b72a45f-26b1-466b-b078-81efe4bb135f-log-httpd\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.420270 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-config-data\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.420345 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.420373 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b72a45f-26b1-466b-b078-81efe4bb135f-run-httpd\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.421334 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b72a45f-26b1-466b-b078-81efe4bb135f-log-httpd\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.423752 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-scripts\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.424392 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.424414 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.425107 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-config-data\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.438709 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xp6vj\" (UniqueName: \"kubernetes.io/projected/8b72a45f-26b1-466b-b078-81efe4bb135f-kube-api-access-xp6vj\") pod \"ceilometer-0\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.470171 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.838095 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.933731 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-combined-ca-bundle\") pod \"87668aa1-185b-4149-98a5-c6ad71face4d\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.933845 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87668aa1-185b-4149-98a5-c6ad71face4d-logs\") pod \"87668aa1-185b-4149-98a5-c6ad71face4d\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.933916 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-public-tls-certs\") pod \"87668aa1-185b-4149-98a5-c6ad71face4d\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.933963 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-internal-tls-certs\") pod \"87668aa1-185b-4149-98a5-c6ad71face4d\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.933992 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbk7b\" (UniqueName: \"kubernetes.io/projected/87668aa1-185b-4149-98a5-c6ad71face4d-kube-api-access-sbk7b\") pod \"87668aa1-185b-4149-98a5-c6ad71face4d\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.934053 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-config-data\") pod \"87668aa1-185b-4149-98a5-c6ad71face4d\" (UID: \"87668aa1-185b-4149-98a5-c6ad71face4d\") " Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.934159 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87668aa1-185b-4149-98a5-c6ad71face4d-logs" (OuterVolumeSpecName: "logs") pod "87668aa1-185b-4149-98a5-c6ad71face4d" (UID: "87668aa1-185b-4149-98a5-c6ad71face4d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.934531 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87668aa1-185b-4149-98a5-c6ad71face4d-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.939096 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87668aa1-185b-4149-98a5-c6ad71face4d-kube-api-access-sbk7b" (OuterVolumeSpecName: "kube-api-access-sbk7b") pod "87668aa1-185b-4149-98a5-c6ad71face4d" (UID: "87668aa1-185b-4149-98a5-c6ad71face4d"). InnerVolumeSpecName "kube-api-access-sbk7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:14:32 crc kubenswrapper[4806]: I0126 08:14:32.986205 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-config-data" (OuterVolumeSpecName: "config-data") pod "87668aa1-185b-4149-98a5-c6ad71face4d" (UID: "87668aa1-185b-4149-98a5-c6ad71face4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.007643 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.008482 4806 generic.go:334] "Generic (PLEG): container finished" podID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" containerID="4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f" exitCode=143 Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.008735 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8","Type":"ContainerDied","Data":"4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f"} Jan 26 08:14:33 crc kubenswrapper[4806]: W0126 08:14:33.009595 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b72a45f_26b1_466b_b078_81efe4bb135f.slice/crio-ef466a1455dc34d7aa2c94066de49c6542e01c9cb2d41e6d41049d133804424d WatchSource:0}: Error finding container ef466a1455dc34d7aa2c94066de49c6542e01c9cb2d41e6d41049d133804424d: Status 404 returned error can't find the container with id ef466a1455dc34d7aa2c94066de49c6542e01c9cb2d41e6d41049d133804424d Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.015954 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87668aa1-185b-4149-98a5-c6ad71face4d" (UID: "87668aa1-185b-4149-98a5-c6ad71face4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.017406 4806 generic.go:334] "Generic (PLEG): container finished" podID="87668aa1-185b-4149-98a5-c6ad71face4d" containerID="9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9" exitCode=0 Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.017601 4806 generic.go:334] "Generic (PLEG): container finished" podID="87668aa1-185b-4149-98a5-c6ad71face4d" containerID="d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1" exitCode=143 Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.017575 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.017544 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87668aa1-185b-4149-98a5-c6ad71face4d","Type":"ContainerDied","Data":"9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9"} Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.022893 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87668aa1-185b-4149-98a5-c6ad71face4d","Type":"ContainerDied","Data":"d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1"} Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.022994 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"87668aa1-185b-4149-98a5-c6ad71face4d","Type":"ContainerDied","Data":"1cdc6c031e13e494d612e1b0d1efdcab663a5600c28573c133920a652fb64e02"} Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.023070 4806 scope.go:117] "RemoveContainer" containerID="9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.018951 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "87668aa1-185b-4149-98a5-c6ad71face4d" (UID: "87668aa1-185b-4149-98a5-c6ad71face4d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.036706 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.037056 4806 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.037108 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbk7b\" (UniqueName: \"kubernetes.io/projected/87668aa1-185b-4149-98a5-c6ad71face4d-kube-api-access-sbk7b\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.037158 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.037433 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "87668aa1-185b-4149-98a5-c6ad71face4d" (UID: "87668aa1-185b-4149-98a5-c6ad71face4d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.067992 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ff9c60-ebe3-40fb-aa92-180ed68ed333" path="/var/lib/kubelet/pods/f6ff9c60-ebe3-40fb-aa92-180ed68ed333/volumes" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.106769 4806 scope.go:117] "RemoveContainer" containerID="d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.138735 4806 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/87668aa1-185b-4149-98a5-c6ad71face4d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.148220 4806 scope.go:117] "RemoveContainer" containerID="9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9" Jan 26 08:14:33 crc kubenswrapper[4806]: E0126 08:14:33.148596 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9\": container with ID starting with 9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9 not found: ID does not exist" containerID="9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.148641 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9"} err="failed to get container status \"9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9\": rpc error: code = NotFound desc = could not find container \"9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9\": container with ID starting with 9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9 not found: ID does not exist" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.148667 4806 scope.go:117] "RemoveContainer" containerID="d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1" Jan 26 08:14:33 crc kubenswrapper[4806]: E0126 08:14:33.149143 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1\": container with ID starting with d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1 not found: ID does not exist" containerID="d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.149184 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1"} err="failed to get container status \"d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1\": rpc error: code = NotFound desc = could not find container \"d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1\": container with ID starting with d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1 not found: ID does not exist" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.149212 4806 scope.go:117] "RemoveContainer" containerID="9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.149581 4806 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9"} err="failed to get container status \"9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9\": rpc error: code = NotFound desc = could not find container \"9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9\": container with ID starting with 9c41a1007d4331751739abb9a48c437902567f4ac02a75190437f278cc35ceb9 not found: ID does not exist" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.149611 4806 scope.go:117] "RemoveContainer" containerID="d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.149847 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1"} err="failed to get container status \"d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1\": rpc error: code = NotFound desc = could not find container \"d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1\": container with ID starting with d0554aa91f225d79772b96b1fa91255ded347d6027163cb5a1ac35f064073ec1 not found: ID does not exist" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.342596 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.350892 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.367984 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 08:14:33 crc kubenswrapper[4806]: E0126 08:14:33.368559 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87668aa1-185b-4149-98a5-c6ad71face4d" containerName="nova-api-log" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.368592 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="87668aa1-185b-4149-98a5-c6ad71face4d" containerName="nova-api-log" Jan 26 08:14:33 crc kubenswrapper[4806]: E0126 08:14:33.368611 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87668aa1-185b-4149-98a5-c6ad71face4d" containerName="nova-api-api" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.368617 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="87668aa1-185b-4149-98a5-c6ad71face4d" containerName="nova-api-api" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.368888 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="87668aa1-185b-4149-98a5-c6ad71face4d" containerName="nova-api-log" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.368910 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="87668aa1-185b-4149-98a5-c6ad71face4d" containerName="nova-api-api" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.369917 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.371864 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.372207 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.376327 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.382400 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.444968 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e34a8b-db1e-40a7-8d39-e791e2e45de9-public-tls-certs\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.445083 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e34a8b-db1e-40a7-8d39-e791e2e45de9-config-data\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.445178 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e34a8b-db1e-40a7-8d39-e791e2e45de9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.445290 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qkpn\" (UniqueName: \"kubernetes.io/projected/58e34a8b-db1e-40a7-8d39-e791e2e45de9-kube-api-access-6qkpn\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.445341 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e34a8b-db1e-40a7-8d39-e791e2e45de9-logs\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.445397 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e34a8b-db1e-40a7-8d39-e791e2e45de9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.547159 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e34a8b-db1e-40a7-8d39-e791e2e45de9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.548041 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qkpn\" (UniqueName: \"kubernetes.io/projected/58e34a8b-db1e-40a7-8d39-e791e2e45de9-kube-api-access-6qkpn\") 
pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.548072 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e34a8b-db1e-40a7-8d39-e791e2e45de9-logs\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.548095 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e34a8b-db1e-40a7-8d39-e791e2e45de9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.548135 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e34a8b-db1e-40a7-8d39-e791e2e45de9-public-tls-certs\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.548179 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e34a8b-db1e-40a7-8d39-e791e2e45de9-config-data\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.548818 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/58e34a8b-db1e-40a7-8d39-e791e2e45de9-logs\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.554066 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/58e34a8b-db1e-40a7-8d39-e791e2e45de9-config-data\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.554133 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e34a8b-db1e-40a7-8d39-e791e2e45de9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.555079 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/58e34a8b-db1e-40a7-8d39-e791e2e45de9-public-tls-certs\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.560060 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58e34a8b-db1e-40a7-8d39-e791e2e45de9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.565724 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qkpn\" (UniqueName: \"kubernetes.io/projected/58e34a8b-db1e-40a7-8d39-e791e2e45de9-kube-api-access-6qkpn\") pod \"nova-api-0\" (UID: \"58e34a8b-db1e-40a7-8d39-e791e2e45de9\") " pod="openstack/nova-api-0" Jan 
26 08:14:33 crc kubenswrapper[4806]: I0126 08:14:33.683873 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 08:14:34 crc kubenswrapper[4806]: I0126 08:14:34.035705 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b72a45f-26b1-466b-b078-81efe4bb135f","Type":"ContainerStarted","Data":"66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846"} Jan 26 08:14:34 crc kubenswrapper[4806]: I0126 08:14:34.035985 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b72a45f-26b1-466b-b078-81efe4bb135f","Type":"ContainerStarted","Data":"ef466a1455dc34d7aa2c94066de49c6542e01c9cb2d41e6d41049d133804424d"} Jan 26 08:14:34 crc kubenswrapper[4806]: I0126 08:14:34.187177 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 08:14:34 crc kubenswrapper[4806]: W0126 08:14:34.204116 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58e34a8b_db1e_40a7_8d39_e791e2e45de9.slice/crio-3245352ce6ad8c491d93dfa1427c44d827e240284d8b639a2be214bc06ae74b7 WatchSource:0}: Error finding container 3245352ce6ad8c491d93dfa1427c44d827e240284d8b639a2be214bc06ae74b7: Status 404 returned error can't find the container with id 3245352ce6ad8c491d93dfa1427c44d827e240284d8b639a2be214bc06ae74b7 Jan 26 08:14:35 crc kubenswrapper[4806]: I0126 08:14:35.051668 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87668aa1-185b-4149-98a5-c6ad71face4d" path="/var/lib/kubelet/pods/87668aa1-185b-4149-98a5-c6ad71face4d/volumes" Jan 26 08:14:35 crc kubenswrapper[4806]: I0126 08:14:35.053090 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"58e34a8b-db1e-40a7-8d39-e791e2e45de9","Type":"ContainerStarted","Data":"d4c0775754c0ad8067e5849a8407f959ae14c7675e2ceaa63e3218db7e8fd666"} Jan 26 08:14:35 crc kubenswrapper[4806]: I0126 08:14:35.053118 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"58e34a8b-db1e-40a7-8d39-e791e2e45de9","Type":"ContainerStarted","Data":"8032cf26be332bf4c23c41dbea52083d9fa2119e120454de8c991c61e0b705ce"} Jan 26 08:14:35 crc kubenswrapper[4806]: I0126 08:14:35.053127 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"58e34a8b-db1e-40a7-8d39-e791e2e45de9","Type":"ContainerStarted","Data":"3245352ce6ad8c491d93dfa1427c44d827e240284d8b639a2be214bc06ae74b7"} Jan 26 08:14:35 crc kubenswrapper[4806]: I0126 08:14:35.066143 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b72a45f-26b1-466b-b078-81efe4bb135f","Type":"ContainerStarted","Data":"b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934"} Jan 26 08:14:35 crc kubenswrapper[4806]: I0126 08:14:35.111487 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.111467075 podStartE2EDuration="2.111467075s" podCreationTimestamp="2026-01-26 08:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:14:35.101331202 +0000 UTC m=+1254.365739258" watchObservedRunningTime="2026-01-26 08:14:35.111467075 +0000 UTC m=+1254.375875131" Jan 26 08:14:35 crc kubenswrapper[4806]: I0126 08:14:35.873416 4806 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.009178 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-nova-metadata-tls-certs\") pod \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.009493 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-combined-ca-bundle\") pod \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.009537 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmnhk\" (UniqueName: \"kubernetes.io/projected/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-kube-api-access-bmnhk\") pod \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.009620 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-config-data\") pod \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.009691 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-logs\") pod \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\" (UID: \"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8\") " Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.011077 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-logs" (OuterVolumeSpecName: "logs") pod "5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" (UID: "5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.033732 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-kube-api-access-bmnhk" (OuterVolumeSpecName: "kube-api-access-bmnhk") pod "5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" (UID: "5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8"). InnerVolumeSpecName "kube-api-access-bmnhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.065138 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" (UID: "5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.090046 4806 generic.go:334] "Generic (PLEG): container finished" podID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" containerID="80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30" exitCode=0 Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.090137 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8","Type":"ContainerDied","Data":"80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30"} Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.090166 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8","Type":"ContainerDied","Data":"c58cfeed0725a913238d2fd90bd33fcd4046b7d0674f53f175f966b6d9258f4c"} Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.090183 4806 scope.go:117] "RemoveContainer" containerID="80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.090318 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.094629 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" (UID: "5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.097765 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b72a45f-26b1-466b-b078-81efe4bb135f","Type":"ContainerStarted","Data":"348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8"} Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.105360 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-config-data" (OuterVolumeSpecName: "config-data") pod "5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" (UID: "5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.114744 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-logs\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.114898 4806 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.114953 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.115000 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmnhk\" (UniqueName: \"kubernetes.io/projected/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-kube-api-access-bmnhk\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.115069 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.181825 4806 scope.go:117] "RemoveContainer" containerID="4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.219969 4806 scope.go:117] "RemoveContainer" containerID="80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30" Jan 26 08:14:36 crc kubenswrapper[4806]: E0126 08:14:36.223341 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30\": container with ID starting with 80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30 not found: ID does not exist" containerID="80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.223372 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30"} err="failed to get container status \"80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30\": rpc error: code = NotFound desc = could not find container \"80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30\": container with ID starting with 80da07caa93be671e91b01f948a5c54c938cc2e3804b599751b6f37795828a30 not found: ID does not exist" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.223414 4806 scope.go:117] "RemoveContainer" containerID="4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f" Jan 26 08:14:36 crc kubenswrapper[4806]: E0126 08:14:36.223782 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f\": container with ID starting with 4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f not found: ID does not exist" containerID="4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.223833 4806 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f"} err="failed to get container status \"4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f\": rpc error: code = NotFound desc = could not find container \"4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f\": container with ID starting with 4e12f63d5bc17c8bbb6ecc00305e481b2ff5ec44225861a592e09b2aa0a46d0f not found: ID does not exist" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.450320 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.455330 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.474798 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:14:36 crc kubenswrapper[4806]: E0126 08:14:36.475512 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" containerName="nova-metadata-metadata" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.475541 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" containerName="nova-metadata-metadata" Jan 26 08:14:36 crc kubenswrapper[4806]: E0126 08:14:36.475557 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" containerName="nova-metadata-log" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.475563 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" containerName="nova-metadata-log" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.475743 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" containerName="nova-metadata-log" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.475770 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" containerName="nova-metadata-metadata" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.477068 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.479661 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.480125 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.498588 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.524588 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-logs\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.524686 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.524738 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-config-data\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.524867 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v4sq\" (UniqueName: \"kubernetes.io/projected/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-kube-api-access-4v4sq\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.524897 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.626591 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.626974 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-config-data\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.627068 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v4sq\" (UniqueName: \"kubernetes.io/projected/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-kube-api-access-4v4sq\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " 
pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.627090 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.627192 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-logs\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.627689 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-logs\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.641629 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-config-data\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.641673 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.645010 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.645935 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v4sq\" (UniqueName: \"kubernetes.io/projected/ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47-kube-api-access-4v4sq\") pod \"nova-metadata-0\" (UID: \"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47\") " pod="openstack/nova-metadata-0" Jan 26 08:14:36 crc kubenswrapper[4806]: I0126 08:14:36.794184 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.057395 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8" path="/var/lib/kubelet/pods/5132f7a4-1bcc-4f2e-8dea-d5382b7a52e8/volumes" Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.114065 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b72a45f-26b1-466b-b078-81efe4bb135f","Type":"ContainerStarted","Data":"bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c"} Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.114297 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 08:14:37 crc kubenswrapper[4806]: E0126 08:14:37.140054 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b is running failed: container process not found" containerID="15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.140093 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.84189628 podStartE2EDuration="5.140064056s" podCreationTimestamp="2026-01-26 08:14:32 +0000 UTC" firstStartedPulling="2026-01-26 08:14:33.016060205 +0000 UTC m=+1252.280468271" lastFinishedPulling="2026-01-26 08:14:36.314227991 +0000 UTC m=+1255.578636047" observedRunningTime="2026-01-26 08:14:37.138191343 +0000 UTC m=+1256.402599409" watchObservedRunningTime="2026-01-26 08:14:37.140064056 +0000 UTC m=+1256.404472112" Jan 26 08:14:37 crc kubenswrapper[4806]: E0126 08:14:37.141860 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b is running failed: container process not found" containerID="15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 08:14:37 crc kubenswrapper[4806]: E0126 08:14:37.142235 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b is running failed: container process not found" containerID="15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 08:14:37 crc kubenswrapper[4806]: E0126 08:14:37.142266 4806 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="adb88266-6c78-4f92-89a8-1f8eb73b60c3" containerName="nova-scheduler-scheduler" Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.227587 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 08:14:37 crc kubenswrapper[4806]: W0126 08:14:37.244457 4806 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee33ca06_96f1_4ef2_ba3c_ddd2a15a2a47.slice/crio-0b2bebbe147ebc94f52c973913cf43a65bc7d7b53c3e26b386b9fc6ed2c16b24 WatchSource:0}: Error finding container 0b2bebbe147ebc94f52c973913cf43a65bc7d7b53c3e26b386b9fc6ed2c16b24: Status 404 returned error can't find the container with id 0b2bebbe147ebc94f52c973913cf43a65bc7d7b53c3e26b386b9fc6ed2c16b24 Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.456340 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.560296 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb88266-6c78-4f92-89a8-1f8eb73b60c3-config-data\") pod \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\" (UID: \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\") " Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.560438 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb88266-6c78-4f92-89a8-1f8eb73b60c3-combined-ca-bundle\") pod \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\" (UID: \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\") " Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.560544 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q72nk\" (UniqueName: \"kubernetes.io/projected/adb88266-6c78-4f92-89a8-1f8eb73b60c3-kube-api-access-q72nk\") pod \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\" (UID: \"adb88266-6c78-4f92-89a8-1f8eb73b60c3\") " Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.572197 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb88266-6c78-4f92-89a8-1f8eb73b60c3-kube-api-access-q72nk" (OuterVolumeSpecName: "kube-api-access-q72nk") pod "adb88266-6c78-4f92-89a8-1f8eb73b60c3" (UID: "adb88266-6c78-4f92-89a8-1f8eb73b60c3"). InnerVolumeSpecName "kube-api-access-q72nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.597065 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb88266-6c78-4f92-89a8-1f8eb73b60c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adb88266-6c78-4f92-89a8-1f8eb73b60c3" (UID: "adb88266-6c78-4f92-89a8-1f8eb73b60c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.607905 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adb88266-6c78-4f92-89a8-1f8eb73b60c3-config-data" (OuterVolumeSpecName: "config-data") pod "adb88266-6c78-4f92-89a8-1f8eb73b60c3" (UID: "adb88266-6c78-4f92-89a8-1f8eb73b60c3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.663187 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adb88266-6c78-4f92-89a8-1f8eb73b60c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.663221 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adb88266-6c78-4f92-89a8-1f8eb73b60c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:37 crc kubenswrapper[4806]: I0126 08:14:37.663233 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q72nk\" (UniqueName: \"kubernetes.io/projected/adb88266-6c78-4f92-89a8-1f8eb73b60c3-kube-api-access-q72nk\") on node \"crc\" DevicePath \"\"" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.122812 4806 generic.go:334] "Generic (PLEG): container finished" podID="adb88266-6c78-4f92-89a8-1f8eb73b60c3" containerID="15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b" exitCode=0 Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.122872 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.122898 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"adb88266-6c78-4f92-89a8-1f8eb73b60c3","Type":"ContainerDied","Data":"15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b"} Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.123236 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"adb88266-6c78-4f92-89a8-1f8eb73b60c3","Type":"ContainerDied","Data":"ad3234e14f1189d7798d4648d0e81ea2a840bdc58ce8228c993380d6bddfad1f"} Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.123254 4806 scope.go:117] "RemoveContainer" containerID="15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.126874 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47","Type":"ContainerStarted","Data":"828e6f03edd299c45fe79cdc8a2dbb1cfc0ff3448b95b0f9f40269f98aa8b6fb"} Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.126901 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47","Type":"ContainerStarted","Data":"7da243114741a729812229842544138fd9c16e8a9e971f3c023041343421a34a"} Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.126911 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47","Type":"ContainerStarted","Data":"0b2bebbe147ebc94f52c973913cf43a65bc7d7b53c3e26b386b9fc6ed2c16b24"} Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.144491 4806 scope.go:117] "RemoveContainer" containerID="15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b" Jan 26 08:14:38 crc kubenswrapper[4806]: E0126 08:14:38.145010 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b\": container with ID starting with 15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b not found: ID does not exist" 
containerID="15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.145054 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b"} err="failed to get container status \"15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b\": rpc error: code = NotFound desc = could not find container \"15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b\": container with ID starting with 15b7071699f668ad4cf42c9e0a968b681c5c25d1c8b04d99a240d413f258f75b not found: ID does not exist" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.147752 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.147739237 podStartE2EDuration="2.147739237s" podCreationTimestamp="2026-01-26 08:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:14:38.145848814 +0000 UTC m=+1257.410256870" watchObservedRunningTime="2026-01-26 08:14:38.147739237 +0000 UTC m=+1257.412147293" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.172354 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.182657 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.201465 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:14:38 crc kubenswrapper[4806]: E0126 08:14:38.202174 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adb88266-6c78-4f92-89a8-1f8eb73b60c3" containerName="nova-scheduler-scheduler" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.202256 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="adb88266-6c78-4f92-89a8-1f8eb73b60c3" containerName="nova-scheduler-scheduler" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.202577 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="adb88266-6c78-4f92-89a8-1f8eb73b60c3" containerName="nova-scheduler-scheduler" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.203388 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.207973 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.224636 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.375637 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twxd2\" (UniqueName: \"kubernetes.io/projected/7b4904b8-0f8b-4492-b879-8361f8b9e092-kube-api-access-twxd2\") pod \"nova-scheduler-0\" (UID: \"7b4904b8-0f8b-4492-b879-8361f8b9e092\") " pod="openstack/nova-scheduler-0" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.375746 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4904b8-0f8b-4492-b879-8361f8b9e092-config-data\") pod \"nova-scheduler-0\" (UID: \"7b4904b8-0f8b-4492-b879-8361f8b9e092\") " pod="openstack/nova-scheduler-0" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.375829 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4904b8-0f8b-4492-b879-8361f8b9e092-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b4904b8-0f8b-4492-b879-8361f8b9e092\") " pod="openstack/nova-scheduler-0" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.478261 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twxd2\" (UniqueName: \"kubernetes.io/projected/7b4904b8-0f8b-4492-b879-8361f8b9e092-kube-api-access-twxd2\") pod \"nova-scheduler-0\" (UID: \"7b4904b8-0f8b-4492-b879-8361f8b9e092\") " pod="openstack/nova-scheduler-0" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.478344 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4904b8-0f8b-4492-b879-8361f8b9e092-config-data\") pod \"nova-scheduler-0\" (UID: \"7b4904b8-0f8b-4492-b879-8361f8b9e092\") " pod="openstack/nova-scheduler-0" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.478550 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4904b8-0f8b-4492-b879-8361f8b9e092-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b4904b8-0f8b-4492-b879-8361f8b9e092\") " pod="openstack/nova-scheduler-0" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.492126 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b4904b8-0f8b-4492-b879-8361f8b9e092-config-data\") pod \"nova-scheduler-0\" (UID: \"7b4904b8-0f8b-4492-b879-8361f8b9e092\") " pod="openstack/nova-scheduler-0" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.492282 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b4904b8-0f8b-4492-b879-8361f8b9e092-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b4904b8-0f8b-4492-b879-8361f8b9e092\") " pod="openstack/nova-scheduler-0" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.495222 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twxd2\" (UniqueName: 
\"kubernetes.io/projected/7b4904b8-0f8b-4492-b879-8361f8b9e092-kube-api-access-twxd2\") pod \"nova-scheduler-0\" (UID: \"7b4904b8-0f8b-4492-b879-8361f8b9e092\") " pod="openstack/nova-scheduler-0" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.518264 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 08:14:38 crc kubenswrapper[4806]: I0126 08:14:38.975020 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 08:14:39 crc kubenswrapper[4806]: I0126 08:14:39.054884 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adb88266-6c78-4f92-89a8-1f8eb73b60c3" path="/var/lib/kubelet/pods/adb88266-6c78-4f92-89a8-1f8eb73b60c3/volumes" Jan 26 08:14:39 crc kubenswrapper[4806]: I0126 08:14:39.137637 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7b4904b8-0f8b-4492-b879-8361f8b9e092","Type":"ContainerStarted","Data":"d2c9f4051c61e46f45d68e4017cba036733ec6ee527db542006e04c5fa736448"} Jan 26 08:14:40 crc kubenswrapper[4806]: I0126 08:14:40.149751 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7b4904b8-0f8b-4492-b879-8361f8b9e092","Type":"ContainerStarted","Data":"80a07a4775e7eb13078634500da274267e6be53794cff1fc7fe94e01acfcc691"} Jan 26 08:14:41 crc kubenswrapper[4806]: I0126 08:14:41.795051 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 08:14:41 crc kubenswrapper[4806]: I0126 08:14:41.795405 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 08:14:43 crc kubenswrapper[4806]: I0126 08:14:43.518733 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 08:14:43 crc kubenswrapper[4806]: I0126 08:14:43.684578 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 08:14:43 crc kubenswrapper[4806]: I0126 08:14:43.684964 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 08:14:44 crc kubenswrapper[4806]: I0126 08:14:44.700151 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="58e34a8b-db1e-40a7-8d39-e791e2e45de9" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 08:14:44 crc kubenswrapper[4806]: I0126 08:14:44.700162 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="58e34a8b-db1e-40a7-8d39-e791e2e45de9" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 08:14:45 crc kubenswrapper[4806]: I0126 08:14:45.806473 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:14:45 crc kubenswrapper[4806]: I0126 08:14:45.806542 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:14:45 crc kubenswrapper[4806]: I0126 08:14:45.806585 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:14:45 crc kubenswrapper[4806]: I0126 08:14:45.807232 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8880d10e53faf854bc25456c263d76882c8161d6eb264ea6dd36a69766a56246"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:14:45 crc kubenswrapper[4806]: I0126 08:14:45.807297 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://8880d10e53faf854bc25456c263d76882c8161d6eb264ea6dd36a69766a56246" gracePeriod=600 Jan 26 08:14:46 crc kubenswrapper[4806]: I0126 08:14:46.449047 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="8880d10e53faf854bc25456c263d76882c8161d6eb264ea6dd36a69766a56246" exitCode=0 Jan 26 08:14:46 crc kubenswrapper[4806]: I0126 08:14:46.449081 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"8880d10e53faf854bc25456c263d76882c8161d6eb264ea6dd36a69766a56246"} Jan 26 08:14:46 crc kubenswrapper[4806]: I0126 08:14:46.449465 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"09669619f64d4d35cd31b87d98b04e88f92b9a54a34f625c50be4875e6fefe66"} Jan 26 08:14:46 crc kubenswrapper[4806]: I0126 08:14:46.449493 4806 scope.go:117] "RemoveContainer" containerID="1062ca2b49b34478f04a62458a36769a2e31737989a78160ffd05a185dfcbbaa" Jan 26 08:14:46 crc kubenswrapper[4806]: I0126 08:14:46.478887 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=8.478870342 podStartE2EDuration="8.478870342s" podCreationTimestamp="2026-01-26 08:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:14:40.175949937 +0000 UTC m=+1259.440358003" watchObservedRunningTime="2026-01-26 08:14:46.478870342 +0000 UTC m=+1265.743278398" Jan 26 08:14:46 crc kubenswrapper[4806]: I0126 08:14:46.795327 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 08:14:46 crc kubenswrapper[4806]: I0126 08:14:46.795719 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 08:14:47 crc kubenswrapper[4806]: I0126 08:14:47.809736 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 08:14:47 
crc kubenswrapper[4806]: I0126 08:14:47.809742 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 08:14:48 crc kubenswrapper[4806]: I0126 08:14:48.519425 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 08:14:48 crc kubenswrapper[4806]: I0126 08:14:48.552965 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 08:14:49 crc kubenswrapper[4806]: I0126 08:14:49.514191 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 08:14:53 crc kubenswrapper[4806]: I0126 08:14:53.694494 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 08:14:53 crc kubenswrapper[4806]: I0126 08:14:53.695789 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 08:14:53 crc kubenswrapper[4806]: I0126 08:14:53.695881 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 08:14:53 crc kubenswrapper[4806]: I0126 08:14:53.708270 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 08:14:54 crc kubenswrapper[4806]: I0126 08:14:54.533057 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 08:14:54 crc kubenswrapper[4806]: I0126 08:14:54.538720 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 08:14:56 crc kubenswrapper[4806]: I0126 08:14:56.812833 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 08:14:56 crc kubenswrapper[4806]: I0126 08:14:56.813572 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 08:14:56 crc kubenswrapper[4806]: I0126 08:14:56.819312 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 08:14:56 crc kubenswrapper[4806]: I0126 08:14:56.826274 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.146362 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc"] Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.147922 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.152493 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.155184 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.171053 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc"] Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.214069 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d813a14-773d-4ceb-858f-8978f96fe6de-config-volume\") pod \"collect-profiles-29490255-wznsc\" (UID: \"2d813a14-773d-4ceb-858f-8978f96fe6de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.214184 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d813a14-773d-4ceb-858f-8978f96fe6de-secret-volume\") pod \"collect-profiles-29490255-wznsc\" (UID: \"2d813a14-773d-4ceb-858f-8978f96fe6de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.214211 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzc4\" (UniqueName: \"kubernetes.io/projected/2d813a14-773d-4ceb-858f-8978f96fe6de-kube-api-access-sxzc4\") pod \"collect-profiles-29490255-wznsc\" (UID: \"2d813a14-773d-4ceb-858f-8978f96fe6de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.315784 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d813a14-773d-4ceb-858f-8978f96fe6de-secret-volume\") pod \"collect-profiles-29490255-wznsc\" (UID: \"2d813a14-773d-4ceb-858f-8978f96fe6de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.316049 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzc4\" (UniqueName: \"kubernetes.io/projected/2d813a14-773d-4ceb-858f-8978f96fe6de-kube-api-access-sxzc4\") pod \"collect-profiles-29490255-wznsc\" (UID: \"2d813a14-773d-4ceb-858f-8978f96fe6de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.316243 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d813a14-773d-4ceb-858f-8978f96fe6de-config-volume\") pod \"collect-profiles-29490255-wznsc\" (UID: \"2d813a14-773d-4ceb-858f-8978f96fe6de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.317095 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d813a14-773d-4ceb-858f-8978f96fe6de-config-volume\") pod 
\"collect-profiles-29490255-wznsc\" (UID: \"2d813a14-773d-4ceb-858f-8978f96fe6de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.322215 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d813a14-773d-4ceb-858f-8978f96fe6de-secret-volume\") pod \"collect-profiles-29490255-wznsc\" (UID: \"2d813a14-773d-4ceb-858f-8978f96fe6de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.333987 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxzc4\" (UniqueName: \"kubernetes.io/projected/2d813a14-773d-4ceb-858f-8978f96fe6de-kube-api-access-sxzc4\") pod \"collect-profiles-29490255-wznsc\" (UID: \"2d813a14-773d-4ceb-858f-8978f96fe6de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.469877 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:00 crc kubenswrapper[4806]: I0126 08:15:00.940414 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc"] Jan 26 08:15:00 crc kubenswrapper[4806]: W0126 08:15:00.952362 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d813a14_773d_4ceb_858f_8978f96fe6de.slice/crio-4c278923d3b04231ca2e647aa22ac0a8a1858050f10cccd874b47160ec07515f WatchSource:0}: Error finding container 4c278923d3b04231ca2e647aa22ac0a8a1858050f10cccd874b47160ec07515f: Status 404 returned error can't find the container with id 4c278923d3b04231ca2e647aa22ac0a8a1858050f10cccd874b47160ec07515f Jan 26 08:15:01 crc kubenswrapper[4806]: I0126 08:15:01.622832 4806 generic.go:334] "Generic (PLEG): container finished" podID="2d813a14-773d-4ceb-858f-8978f96fe6de" containerID="954a212c4d7ea0d7669d70baa7fd88d9093d7b86e201cd24f8678ceff7b15c54" exitCode=0 Jan 26 08:15:01 crc kubenswrapper[4806]: I0126 08:15:01.622879 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" event={"ID":"2d813a14-773d-4ceb-858f-8978f96fe6de","Type":"ContainerDied","Data":"954a212c4d7ea0d7669d70baa7fd88d9093d7b86e201cd24f8678ceff7b15c54"} Jan 26 08:15:01 crc kubenswrapper[4806]: I0126 08:15:01.623174 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" event={"ID":"2d813a14-773d-4ceb-858f-8978f96fe6de","Type":"ContainerStarted","Data":"4c278923d3b04231ca2e647aa22ac0a8a1858050f10cccd874b47160ec07515f"} Jan 26 08:15:02 crc kubenswrapper[4806]: I0126 08:15:02.480384 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 08:15:02 crc kubenswrapper[4806]: I0126 08:15:02.966230 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:03 crc kubenswrapper[4806]: I0126 08:15:03.070932 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzc4\" (UniqueName: \"kubernetes.io/projected/2d813a14-773d-4ceb-858f-8978f96fe6de-kube-api-access-sxzc4\") pod \"2d813a14-773d-4ceb-858f-8978f96fe6de\" (UID: \"2d813a14-773d-4ceb-858f-8978f96fe6de\") " Jan 26 08:15:03 crc kubenswrapper[4806]: I0126 08:15:03.071107 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d813a14-773d-4ceb-858f-8978f96fe6de-config-volume\") pod \"2d813a14-773d-4ceb-858f-8978f96fe6de\" (UID: \"2d813a14-773d-4ceb-858f-8978f96fe6de\") " Jan 26 08:15:03 crc kubenswrapper[4806]: I0126 08:15:03.071224 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d813a14-773d-4ceb-858f-8978f96fe6de-secret-volume\") pod \"2d813a14-773d-4ceb-858f-8978f96fe6de\" (UID: \"2d813a14-773d-4ceb-858f-8978f96fe6de\") " Jan 26 08:15:03 crc kubenswrapper[4806]: I0126 08:15:03.072157 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d813a14-773d-4ceb-858f-8978f96fe6de-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d813a14-773d-4ceb-858f-8978f96fe6de" (UID: "2d813a14-773d-4ceb-858f-8978f96fe6de"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:03 crc kubenswrapper[4806]: I0126 08:15:03.077021 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d813a14-773d-4ceb-858f-8978f96fe6de-kube-api-access-sxzc4" (OuterVolumeSpecName: "kube-api-access-sxzc4") pod "2d813a14-773d-4ceb-858f-8978f96fe6de" (UID: "2d813a14-773d-4ceb-858f-8978f96fe6de"). InnerVolumeSpecName "kube-api-access-sxzc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:15:03 crc kubenswrapper[4806]: I0126 08:15:03.078661 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d813a14-773d-4ceb-858f-8978f96fe6de-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d813a14-773d-4ceb-858f-8978f96fe6de" (UID: "2d813a14-773d-4ceb-858f-8978f96fe6de"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:15:03 crc kubenswrapper[4806]: I0126 08:15:03.177787 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxzc4\" (UniqueName: \"kubernetes.io/projected/2d813a14-773d-4ceb-858f-8978f96fe6de-kube-api-access-sxzc4\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:03 crc kubenswrapper[4806]: I0126 08:15:03.177819 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d813a14-773d-4ceb-858f-8978f96fe6de-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:03 crc kubenswrapper[4806]: I0126 08:15:03.177830 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d813a14-773d-4ceb-858f-8978f96fe6de-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:03 crc kubenswrapper[4806]: I0126 08:15:03.643557 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" event={"ID":"2d813a14-773d-4ceb-858f-8978f96fe6de","Type":"ContainerDied","Data":"4c278923d3b04231ca2e647aa22ac0a8a1858050f10cccd874b47160ec07515f"} Jan 26 08:15:03 crc kubenswrapper[4806]: I0126 08:15:03.643884 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c278923d3b04231ca2e647aa22ac0a8a1858050f10cccd874b47160ec07515f" Jan 26 08:15:03 crc kubenswrapper[4806]: I0126 08:15:03.643608 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc" Jan 26 08:15:06 crc kubenswrapper[4806]: I0126 08:15:06.389882 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 08:15:06 crc kubenswrapper[4806]: I0126 08:15:06.390092 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a" containerName="kube-state-metrics" containerID="cri-o://01d822e815455b90e89921b950bd9623a731401fcd546f20ff1ebec61aab5f8e" gracePeriod=30 Jan 26 08:15:06 crc kubenswrapper[4806]: I0126 08:15:06.691062 4806 generic.go:334] "Generic (PLEG): container finished" podID="0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a" containerID="01d822e815455b90e89921b950bd9623a731401fcd546f20ff1ebec61aab5f8e" exitCode=2 Jan 26 08:15:06 crc kubenswrapper[4806]: I0126 08:15:06.691139 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a","Type":"ContainerDied","Data":"01d822e815455b90e89921b950bd9623a731401fcd546f20ff1ebec61aab5f8e"} Jan 26 08:15:06 crc kubenswrapper[4806]: I0126 08:15:06.908002 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.050884 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r6cr\" (UniqueName: \"kubernetes.io/projected/0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a-kube-api-access-8r6cr\") pod \"0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a\" (UID: \"0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a\") " Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.063168 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a-kube-api-access-8r6cr" (OuterVolumeSpecName: "kube-api-access-8r6cr") pod "0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a" (UID: "0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a"). InnerVolumeSpecName "kube-api-access-8r6cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.154385 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r6cr\" (UniqueName: \"kubernetes.io/projected/0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a-kube-api-access-8r6cr\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.700556 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a","Type":"ContainerDied","Data":"002cb32771f528b20edab6100fbf0401cf4e50f474e4ba20d6291943ffbe329c"} Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.700588 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.700613 4806 scope.go:117] "RemoveContainer" containerID="01d822e815455b90e89921b950bd9623a731401fcd546f20ff1ebec61aab5f8e" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.731368 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.741857 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.760941 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 08:15:07 crc kubenswrapper[4806]: E0126 08:15:07.761366 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d813a14-773d-4ceb-858f-8978f96fe6de" containerName="collect-profiles" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.761378 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d813a14-773d-4ceb-858f-8978f96fe6de" containerName="collect-profiles" Jan 26 08:15:07 crc kubenswrapper[4806]: E0126 08:15:07.761393 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a" containerName="kube-state-metrics" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.761398 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a" containerName="kube-state-metrics" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.761595 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d813a14-773d-4ceb-858f-8978f96fe6de" containerName="collect-profiles" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.761630 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a" containerName="kube-state-metrics" Jan 26 08:15:07 crc 
kubenswrapper[4806]: I0126 08:15:07.762352 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.767682 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.767953 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.772895 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.864732 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/44dd7b53-ec3f-4dde-b448-315e571d5249-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"44dd7b53-ec3f-4dde-b448-315e571d5249\") " pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.864790 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjjtv\" (UniqueName: \"kubernetes.io/projected/44dd7b53-ec3f-4dde-b448-315e571d5249-kube-api-access-jjjtv\") pod \"kube-state-metrics-0\" (UID: \"44dd7b53-ec3f-4dde-b448-315e571d5249\") " pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.864874 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44dd7b53-ec3f-4dde-b448-315e571d5249-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"44dd7b53-ec3f-4dde-b448-315e571d5249\") " pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.864893 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/44dd7b53-ec3f-4dde-b448-315e571d5249-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"44dd7b53-ec3f-4dde-b448-315e571d5249\") " pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.965942 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjjtv\" (UniqueName: \"kubernetes.io/projected/44dd7b53-ec3f-4dde-b448-315e571d5249-kube-api-access-jjjtv\") pod \"kube-state-metrics-0\" (UID: \"44dd7b53-ec3f-4dde-b448-315e571d5249\") " pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.966051 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44dd7b53-ec3f-4dde-b448-315e571d5249-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"44dd7b53-ec3f-4dde-b448-315e571d5249\") " pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.966075 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/44dd7b53-ec3f-4dde-b448-315e571d5249-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"44dd7b53-ec3f-4dde-b448-315e571d5249\") " pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.966141 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/44dd7b53-ec3f-4dde-b448-315e571d5249-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"44dd7b53-ec3f-4dde-b448-315e571d5249\") " pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.974167 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/44dd7b53-ec3f-4dde-b448-315e571d5249-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"44dd7b53-ec3f-4dde-b448-315e571d5249\") " pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.976212 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44dd7b53-ec3f-4dde-b448-315e571d5249-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"44dd7b53-ec3f-4dde-b448-315e571d5249\") " pod="openstack/kube-state-metrics-0" Jan 26 08:15:07 crc kubenswrapper[4806]: I0126 08:15:07.976691 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/44dd7b53-ec3f-4dde-b448-315e571d5249-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"44dd7b53-ec3f-4dde-b448-315e571d5249\") " pod="openstack/kube-state-metrics-0" Jan 26 08:15:08 crc kubenswrapper[4806]: I0126 08:15:08.004355 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjjtv\" (UniqueName: \"kubernetes.io/projected/44dd7b53-ec3f-4dde-b448-315e571d5249-kube-api-access-jjjtv\") pod \"kube-state-metrics-0\" (UID: \"44dd7b53-ec3f-4dde-b448-315e571d5249\") " pod="openstack/kube-state-metrics-0" Jan 26 08:15:08 crc kubenswrapper[4806]: I0126 08:15:08.078408 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 08:15:08 crc kubenswrapper[4806]: I0126 08:15:08.547460 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 08:15:08 crc kubenswrapper[4806]: I0126 08:15:08.723944 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"44dd7b53-ec3f-4dde-b448-315e571d5249","Type":"ContainerStarted","Data":"5284791340dec6f2ea63dd0bcb4536ee7d5b3d5f782b2b9b66dd3a7c215d5f71"} Jan 26 08:15:08 crc kubenswrapper[4806]: I0126 08:15:08.730292 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:15:08 crc kubenswrapper[4806]: I0126 08:15:08.730670 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="ceilometer-central-agent" containerID="cri-o://66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846" gracePeriod=30 Jan 26 08:15:08 crc kubenswrapper[4806]: I0126 08:15:08.730805 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="proxy-httpd" containerID="cri-o://bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c" gracePeriod=30 Jan 26 08:15:08 crc kubenswrapper[4806]: I0126 08:15:08.730851 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="sg-core" containerID="cri-o://348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8" gracePeriod=30 Jan 26 08:15:08 crc kubenswrapper[4806]: I0126 08:15:08.730884 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="ceilometer-notification-agent" containerID="cri-o://b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934" gracePeriod=30 Jan 26 08:15:09 crc kubenswrapper[4806]: I0126 08:15:09.053443 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a" path="/var/lib/kubelet/pods/0e3b9ca7-b75b-4a84-b3f1-23b98ee8ed2a/volumes" Jan 26 08:15:09 crc kubenswrapper[4806]: I0126 08:15:09.736232 4806 generic.go:334] "Generic (PLEG): container finished" podID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerID="bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c" exitCode=0 Jan 26 08:15:09 crc kubenswrapper[4806]: I0126 08:15:09.736271 4806 generic.go:334] "Generic (PLEG): container finished" podID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerID="348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8" exitCode=2 Jan 26 08:15:09 crc kubenswrapper[4806]: I0126 08:15:09.736278 4806 generic.go:334] "Generic (PLEG): container finished" podID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerID="66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846" exitCode=0 Jan 26 08:15:09 crc kubenswrapper[4806]: I0126 08:15:09.736307 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b72a45f-26b1-466b-b078-81efe4bb135f","Type":"ContainerDied","Data":"bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c"} Jan 26 08:15:09 crc kubenswrapper[4806]: I0126 08:15:09.736349 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"8b72a45f-26b1-466b-b078-81efe4bb135f","Type":"ContainerDied","Data":"348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8"} Jan 26 08:15:09 crc kubenswrapper[4806]: I0126 08:15:09.736359 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b72a45f-26b1-466b-b078-81efe4bb135f","Type":"ContainerDied","Data":"66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846"} Jan 26 08:15:09 crc kubenswrapper[4806]: I0126 08:15:09.738716 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"44dd7b53-ec3f-4dde-b448-315e571d5249","Type":"ContainerStarted","Data":"000ec62f3b06f63d0e0fa0a2da36ae51c547a85ff5054c32463299979e827b1a"} Jan 26 08:15:09 crc kubenswrapper[4806]: I0126 08:15:09.738886 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 08:15:09 crc kubenswrapper[4806]: I0126 08:15:09.766203 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.38327985 podStartE2EDuration="2.766183168s" podCreationTimestamp="2026-01-26 08:15:07 +0000 UTC" firstStartedPulling="2026-01-26 08:15:08.554128482 +0000 UTC m=+1287.818536538" lastFinishedPulling="2026-01-26 08:15:08.93703181 +0000 UTC m=+1288.201439856" observedRunningTime="2026-01-26 08:15:09.760131039 +0000 UTC m=+1289.024539095" watchObservedRunningTime="2026-01-26 08:15:09.766183168 +0000 UTC m=+1289.030591224" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.440957 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.533122 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-sg-core-conf-yaml\") pod \"8b72a45f-26b1-466b-b078-81efe4bb135f\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.533190 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b72a45f-26b1-466b-b078-81efe4bb135f-log-httpd\") pod \"8b72a45f-26b1-466b-b078-81efe4bb135f\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.533233 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-config-data\") pod \"8b72a45f-26b1-466b-b078-81efe4bb135f\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.533286 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-combined-ca-bundle\") pod \"8b72a45f-26b1-466b-b078-81efe4bb135f\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.533428 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-scripts\") pod \"8b72a45f-26b1-466b-b078-81efe4bb135f\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.533510 4806 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xp6vj\" (UniqueName: \"kubernetes.io/projected/8b72a45f-26b1-466b-b078-81efe4bb135f-kube-api-access-xp6vj\") pod \"8b72a45f-26b1-466b-b078-81efe4bb135f\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.533659 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b72a45f-26b1-466b-b078-81efe4bb135f-run-httpd\") pod \"8b72a45f-26b1-466b-b078-81efe4bb135f\" (UID: \"8b72a45f-26b1-466b-b078-81efe4bb135f\") " Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.534642 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b72a45f-26b1-466b-b078-81efe4bb135f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8b72a45f-26b1-466b-b078-81efe4bb135f" (UID: "8b72a45f-26b1-466b-b078-81efe4bb135f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.536771 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b72a45f-26b1-466b-b078-81efe4bb135f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8b72a45f-26b1-466b-b078-81efe4bb135f" (UID: "8b72a45f-26b1-466b-b078-81efe4bb135f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.546123 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b72a45f-26b1-466b-b078-81efe4bb135f-kube-api-access-xp6vj" (OuterVolumeSpecName: "kube-api-access-xp6vj") pod "8b72a45f-26b1-466b-b078-81efe4bb135f" (UID: "8b72a45f-26b1-466b-b078-81efe4bb135f"). InnerVolumeSpecName "kube-api-access-xp6vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.548062 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-scripts" (OuterVolumeSpecName: "scripts") pod "8b72a45f-26b1-466b-b078-81efe4bb135f" (UID: "8b72a45f-26b1-466b-b078-81efe4bb135f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.631706 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8b72a45f-26b1-466b-b078-81efe4bb135f" (UID: "8b72a45f-26b1-466b-b078-81efe4bb135f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.635694 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xp6vj\" (UniqueName: \"kubernetes.io/projected/8b72a45f-26b1-466b-b078-81efe4bb135f-kube-api-access-xp6vj\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.635724 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b72a45f-26b1-466b-b078-81efe4bb135f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.635735 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.635747 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8b72a45f-26b1-466b-b078-81efe4bb135f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.635755 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.667778 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b72a45f-26b1-466b-b078-81efe4bb135f" (UID: "8b72a45f-26b1-466b-b078-81efe4bb135f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.690058 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-config-data" (OuterVolumeSpecName: "config-data") pod "8b72a45f-26b1-466b-b078-81efe4bb135f" (UID: "8b72a45f-26b1-466b-b078-81efe4bb135f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.737817 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.738142 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b72a45f-26b1-466b-b078-81efe4bb135f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.756957 4806 generic.go:334] "Generic (PLEG): container finished" podID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerID="b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934" exitCode=0 Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.757005 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b72a45f-26b1-466b-b078-81efe4bb135f","Type":"ContainerDied","Data":"b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934"} Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.757066 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8b72a45f-26b1-466b-b078-81efe4bb135f","Type":"ContainerDied","Data":"ef466a1455dc34d7aa2c94066de49c6542e01c9cb2d41e6d41049d133804424d"} Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.757085 4806 scope.go:117] "RemoveContainer" containerID="bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.757586 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.814915 4806 scope.go:117] "RemoveContainer" containerID="348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.831533 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.850715 4806 scope.go:117] "RemoveContainer" containerID="b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.850934 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.863858 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:15:11 crc kubenswrapper[4806]: E0126 08:15:11.864429 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="ceilometer-central-agent" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.864448 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="ceilometer-central-agent" Jan 26 08:15:11 crc kubenswrapper[4806]: E0126 08:15:11.864467 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="sg-core" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.864476 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="sg-core" Jan 26 08:15:11 crc kubenswrapper[4806]: E0126 08:15:11.864495 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" 
containerName="proxy-httpd" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.864501 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="proxy-httpd" Jan 26 08:15:11 crc kubenswrapper[4806]: E0126 08:15:11.864604 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="ceilometer-notification-agent" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.864612 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="ceilometer-notification-agent" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.864847 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="sg-core" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.864864 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="ceilometer-notification-agent" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.864878 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="proxy-httpd" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.864893 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" containerName="ceilometer-central-agent" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.870377 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.871136 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.878488 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.879002 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.886703 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.892444 4806 scope.go:117] "RemoveContainer" containerID="66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.920381 4806 scope.go:117] "RemoveContainer" containerID="bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c" Jan 26 08:15:11 crc kubenswrapper[4806]: E0126 08:15:11.921862 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c\": container with ID starting with bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c not found: ID does not exist" containerID="bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.921968 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c"} err="failed to get container status \"bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c\": rpc error: code = NotFound desc = could not find container 
\"bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c\": container with ID starting with bcd67f48431fec9876dffc4b30b73b982bf98e6ce2b65a1f7c2de71015c4ec8c not found: ID does not exist" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.922043 4806 scope.go:117] "RemoveContainer" containerID="348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8" Jan 26 08:15:11 crc kubenswrapper[4806]: E0126 08:15:11.922534 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8\": container with ID starting with 348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8 not found: ID does not exist" containerID="348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.922576 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8"} err="failed to get container status \"348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8\": rpc error: code = NotFound desc = could not find container \"348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8\": container with ID starting with 348daa5e7fe6006b9d19ae795f93f256a66f95f9b25b535951e16ed16a378ab8 not found: ID does not exist" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.922603 4806 scope.go:117] "RemoveContainer" containerID="b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934" Jan 26 08:15:11 crc kubenswrapper[4806]: E0126 08:15:11.922821 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934\": container with ID starting with b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934 not found: ID does not exist" containerID="b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.922842 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934"} err="failed to get container status \"b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934\": rpc error: code = NotFound desc = could not find container \"b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934\": container with ID starting with b6a9d5e0a4af6a60e284276b947fcaa62a03a783e062af0b7556c8508b5f0934 not found: ID does not exist" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.922855 4806 scope.go:117] "RemoveContainer" containerID="66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846" Jan 26 08:15:11 crc kubenswrapper[4806]: E0126 08:15:11.923032 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846\": container with ID starting with 66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846 not found: ID does not exist" containerID="66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.923051 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846"} 
err="failed to get container status \"66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846\": rpc error: code = NotFound desc = could not find container \"66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846\": container with ID starting with 66ad429a9b8224d09e62258884165fc48ad4b3a5d69e56cc46f2542154605846 not found: ID does not exist" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.942076 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-scripts\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.942436 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgnb5\" (UniqueName: \"kubernetes.io/projected/be6dfa34-fa38-4375-be1f-467c5428818d-kube-api-access-qgnb5\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.942587 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-config-data\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.942713 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.942833 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6dfa34-fa38-4375-be1f-467c5428818d-run-httpd\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.942969 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.943090 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:11 crc kubenswrapper[4806]: I0126 08:15:11.943231 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6dfa34-fa38-4375-be1f-467c5428818d-log-httpd\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.049264 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-scripts\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.049328 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgnb5\" (UniqueName: \"kubernetes.io/projected/be6dfa34-fa38-4375-be1f-467c5428818d-kube-api-access-qgnb5\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.049360 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-config-data\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.049389 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.049405 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6dfa34-fa38-4375-be1f-467c5428818d-run-httpd\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.049423 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.049451 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.049473 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6dfa34-fa38-4375-be1f-467c5428818d-log-httpd\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.050287 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6dfa34-fa38-4375-be1f-467c5428818d-log-httpd\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.051491 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be6dfa34-fa38-4375-be1f-467c5428818d-run-httpd\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.054689 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-config-data\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.067184 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-scripts\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.069032 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.069252 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.073345 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgnb5\" (UniqueName: \"kubernetes.io/projected/be6dfa34-fa38-4375-be1f-467c5428818d-kube-api-access-qgnb5\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.084054 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be6dfa34-fa38-4375-be1f-467c5428818d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be6dfa34-fa38-4375-be1f-467c5428818d\") " pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.192696 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.679846 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 08:15:12 crc kubenswrapper[4806]: I0126 08:15:12.769394 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6dfa34-fa38-4375-be1f-467c5428818d","Type":"ContainerStarted","Data":"6b10fefc0187627247c23d3e3619e92efa102d8d18dfe1129900a6e0da77b239"} Jan 26 08:15:13 crc kubenswrapper[4806]: I0126 08:15:13.053036 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b72a45f-26b1-466b-b078-81efe4bb135f" path="/var/lib/kubelet/pods/8b72a45f-26b1-466b-b078-81efe4bb135f/volumes" Jan 26 08:15:13 crc kubenswrapper[4806]: I0126 08:15:13.780261 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6dfa34-fa38-4375-be1f-467c5428818d","Type":"ContainerStarted","Data":"51b58dec6b731f6ab8cc3b34a5fd67b77085b42ed3d14aaa091232da2ff1234b"} Jan 26 08:15:13 crc kubenswrapper[4806]: I0126 08:15:13.996779 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 08:15:14 crc kubenswrapper[4806]: I0126 08:15:14.790394 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6dfa34-fa38-4375-be1f-467c5428818d","Type":"ContainerStarted","Data":"d172e39f1e99c0606d173093da4b30a5e9d97348490b4a743723f17caf773a2a"} Jan 26 08:15:15 crc kubenswrapper[4806]: I0126 08:15:15.294604 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 08:15:15 crc kubenswrapper[4806]: I0126 08:15:15.802203 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6dfa34-fa38-4375-be1f-467c5428818d","Type":"ContainerStarted","Data":"f81e3793cbc4659b5071a73a7ed5bc01fd8fc5a7050b40124b1c0182a020292f"} Jan 26 08:15:16 crc kubenswrapper[4806]: I0126 08:15:16.814659 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be6dfa34-fa38-4375-be1f-467c5428818d","Type":"ContainerStarted","Data":"93d21f5a77dd887cf093102ea4930917a3a8d198d2b0f48d83414467fd4b0d92"} Jan 26 08:15:16 crc kubenswrapper[4806]: I0126 08:15:16.815840 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 08:15:16 crc kubenswrapper[4806]: I0126 08:15:16.843341 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.336260788 podStartE2EDuration="5.843326006s" podCreationTimestamp="2026-01-26 08:15:11 +0000 UTC" firstStartedPulling="2026-01-26 08:15:12.683320268 +0000 UTC m=+1291.947728324" lastFinishedPulling="2026-01-26 08:15:16.190385486 +0000 UTC m=+1295.454793542" observedRunningTime="2026-01-26 08:15:16.839486859 +0000 UTC m=+1296.103894905" watchObservedRunningTime="2026-01-26 08:15:16.843326006 +0000 UTC m=+1296.107734062" Jan 26 08:15:18 crc kubenswrapper[4806]: I0126 08:15:18.172587 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 08:15:19 crc kubenswrapper[4806]: I0126 08:15:19.327305 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" containerName="rabbitmq" containerID="cri-o://1727c328532a2dfb6251d6e5e4df741df38623a7ae21c46b0fa9b876282b4d7f" gracePeriod=604795 
Jan 26 08:15:20 crc kubenswrapper[4806]: I0126 08:15:20.153561 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="025ae3ca-3082-4bc8-8611-5b23cec63932" containerName="rabbitmq" containerID="cri-o://f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc" gracePeriod=604796 Jan 26 08:15:25 crc kubenswrapper[4806]: I0126 08:15:25.901244 4806 generic.go:334] "Generic (PLEG): container finished" podID="8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" containerID="1727c328532a2dfb6251d6e5e4df741df38623a7ae21c46b0fa9b876282b4d7f" exitCode=0 Jan 26 08:15:25 crc kubenswrapper[4806]: I0126 08:15:25.901286 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35","Type":"ContainerDied","Data":"1727c328532a2dfb6251d6e5e4df741df38623a7ae21c46b0fa9b876282b4d7f"} Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.019107 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.122770 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-plugins\") pod \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.122831 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.122855 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-erlang-cookie-secret\") pod \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.122899 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-server-conf\") pod \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.123008 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-pod-info\") pod \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.123075 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-tls\") pod \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.123101 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkj6w\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-kube-api-access-xkj6w\") pod \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " Jan 26 
08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.123130 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-plugins-conf\") pod \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.123173 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-erlang-cookie\") pod \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.123194 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-confd\") pod \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.123242 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-config-data\") pod \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\" (UID: \"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.130398 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" (UID: "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.133544 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" (UID: "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.135992 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" (UID: "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.142712 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" (UID: "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.143541 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-kube-api-access-xkj6w" (OuterVolumeSpecName: "kube-api-access-xkj6w") pod "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" (UID: "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35"). InnerVolumeSpecName "kube-api-access-xkj6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.151167 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" (UID: "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.151601 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-pod-info" (OuterVolumeSpecName: "pod-info") pod "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" (UID: "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.152077 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" (UID: "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.168580 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-config-data" (OuterVolumeSpecName: "config-data") pod "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" (UID: "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.204896 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-server-conf" (OuterVolumeSpecName: "server-conf") pod "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" (UID: "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.230290 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.230607 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.230752 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.230855 4806 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.230941 4806 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.231016 4806 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.231127 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.231214 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkj6w\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-kube-api-access-xkj6w\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.231302 4806 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.231382 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.254564 4806 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.266540 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" (UID: "8f0a4edb-6a17-43a2-9c55-88ded9dfcc35"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.333765 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.333807 4806 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.787615 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.842298 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-tls\") pod \"025ae3ca-3082-4bc8-8611-5b23cec63932\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.842359 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/025ae3ca-3082-4bc8-8611-5b23cec63932-erlang-cookie-secret\") pod \"025ae3ca-3082-4bc8-8611-5b23cec63932\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.842401 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/025ae3ca-3082-4bc8-8611-5b23cec63932-pod-info\") pod \"025ae3ca-3082-4bc8-8611-5b23cec63932\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.842453 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr76k\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-kube-api-access-gr76k\") pod \"025ae3ca-3082-4bc8-8611-5b23cec63932\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.842572 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-erlang-cookie\") pod \"025ae3ca-3082-4bc8-8611-5b23cec63932\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.842620 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-plugins-conf\") pod \"025ae3ca-3082-4bc8-8611-5b23cec63932\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.842677 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-plugins\") pod \"025ae3ca-3082-4bc8-8611-5b23cec63932\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.842700 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-config-data\") pod \"025ae3ca-3082-4bc8-8611-5b23cec63932\" 
(UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.842739 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"025ae3ca-3082-4bc8-8611-5b23cec63932\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.842764 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-confd\") pod \"025ae3ca-3082-4bc8-8611-5b23cec63932\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.842805 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-server-conf\") pod \"025ae3ca-3082-4bc8-8611-5b23cec63932\" (UID: \"025ae3ca-3082-4bc8-8611-5b23cec63932\") " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.843783 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "025ae3ca-3082-4bc8-8611-5b23cec63932" (UID: "025ae3ca-3082-4bc8-8611-5b23cec63932"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.844147 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "025ae3ca-3082-4bc8-8611-5b23cec63932" (UID: "025ae3ca-3082-4bc8-8611-5b23cec63932"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.845027 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "025ae3ca-3082-4bc8-8611-5b23cec63932" (UID: "025ae3ca-3082-4bc8-8611-5b23cec63932"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.861458 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "025ae3ca-3082-4bc8-8611-5b23cec63932" (UID: "025ae3ca-3082-4bc8-8611-5b23cec63932"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.862686 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "025ae3ca-3082-4bc8-8611-5b23cec63932" (UID: "025ae3ca-3082-4bc8-8611-5b23cec63932"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.869248 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-kube-api-access-gr76k" (OuterVolumeSpecName: "kube-api-access-gr76k") pod "025ae3ca-3082-4bc8-8611-5b23cec63932" (UID: "025ae3ca-3082-4bc8-8611-5b23cec63932"). InnerVolumeSpecName "kube-api-access-gr76k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.869440 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/025ae3ca-3082-4bc8-8611-5b23cec63932-pod-info" (OuterVolumeSpecName: "pod-info") pod "025ae3ca-3082-4bc8-8611-5b23cec63932" (UID: "025ae3ca-3082-4bc8-8611-5b23cec63932"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.896675 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/025ae3ca-3082-4bc8-8611-5b23cec63932-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "025ae3ca-3082-4bc8-8611-5b23cec63932" (UID: "025ae3ca-3082-4bc8-8611-5b23cec63932"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.920886 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-config-data" (OuterVolumeSpecName: "config-data") pod "025ae3ca-3082-4bc8-8611-5b23cec63932" (UID: "025ae3ca-3082-4bc8-8611-5b23cec63932"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.946829 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr76k\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-kube-api-access-gr76k\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.946854 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.946863 4806 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.946873 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.946881 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.946903 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.946912 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.946920 4806 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/025ae3ca-3082-4bc8-8611-5b23cec63932-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.946929 4806 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/025ae3ca-3082-4bc8-8611-5b23cec63932-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.948006 4806 generic.go:334] "Generic (PLEG): container finished" podID="025ae3ca-3082-4bc8-8611-5b23cec63932" containerID="f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc" exitCode=0 Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.948472 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"025ae3ca-3082-4bc8-8611-5b23cec63932","Type":"ContainerDied","Data":"f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc"} Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.948539 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"025ae3ca-3082-4bc8-8611-5b23cec63932","Type":"ContainerDied","Data":"261fde538c17c5c59f604f1bee431cd40a02e04aeb03e67f7f3577e90392d908"} Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.948559 4806 scope.go:117] "RemoveContainer" containerID="f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc" Jan 26 08:15:26 crc 
kubenswrapper[4806]: I0126 08:15:26.948751 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.975811 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-server-conf" (OuterVolumeSpecName: "server-conf") pod "025ae3ca-3082-4bc8-8611-5b23cec63932" (UID: "025ae3ca-3082-4bc8-8611-5b23cec63932"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.982571 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8f0a4edb-6a17-43a2-9c55-88ded9dfcc35","Type":"ContainerDied","Data":"b539fddec11123e110e6cdb1ffeb36945cec2a3941bcf94deac4653c6cbaee79"} Jan 26 08:15:26 crc kubenswrapper[4806]: I0126 08:15:26.982660 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.013855 4806 scope.go:117] "RemoveContainer" containerID="064bb866426cda4e680a27ba4d576877d2096d5ec28f01113c3d45c614189c37" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.045431 4806 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.057645 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "025ae3ca-3082-4bc8-8611-5b23cec63932" (UID: "025ae3ca-3082-4bc8-8611-5b23cec63932"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.064290 4806 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.064359 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/025ae3ca-3082-4bc8-8611-5b23cec63932-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.064376 4806 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/025ae3ca-3082-4bc8-8611-5b23cec63932-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.083120 4806 scope.go:117] "RemoveContainer" containerID="f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc" Jan 26 08:15:27 crc kubenswrapper[4806]: E0126 08:15:27.085829 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc\": container with ID starting with f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc not found: ID does not exist" containerID="f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.085878 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc"} err="failed to get container status \"f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc\": rpc error: code = NotFound desc = could not find container \"f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc\": container with ID starting with f141762180104c521b7bffcd8ea54361bb9f059083d362577cf9100a2dd235dc not found: ID does not exist" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.085904 4806 scope.go:117] "RemoveContainer" containerID="064bb866426cda4e680a27ba4d576877d2096d5ec28f01113c3d45c614189c37" Jan 26 08:15:27 crc kubenswrapper[4806]: E0126 08:15:27.089054 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064bb866426cda4e680a27ba4d576877d2096d5ec28f01113c3d45c614189c37\": container with ID starting with 064bb866426cda4e680a27ba4d576877d2096d5ec28f01113c3d45c614189c37 not found: ID does not exist" containerID="064bb866426cda4e680a27ba4d576877d2096d5ec28f01113c3d45c614189c37" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.089088 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064bb866426cda4e680a27ba4d576877d2096d5ec28f01113c3d45c614189c37"} err="failed to get container status \"064bb866426cda4e680a27ba4d576877d2096d5ec28f01113c3d45c614189c37\": rpc error: code = NotFound desc = could not find container \"064bb866426cda4e680a27ba4d576877d2096d5ec28f01113c3d45c614189c37\": container with ID starting with 064bb866426cda4e680a27ba4d576877d2096d5ec28f01113c3d45c614189c37 not found: ID does not exist" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.089112 4806 scope.go:117] "RemoveContainer" containerID="1727c328532a2dfb6251d6e5e4df741df38623a7ae21c46b0fa9b876282b4d7f" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.094131 4806 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.114859 4806 scope.go:117] "RemoveContainer" containerID="2218f5533d96af9fe346f68622866cb68caba66cdeb205c50f295727b54e7752" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.128872 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.143053 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 08:15:27 crc kubenswrapper[4806]: E0126 08:15:27.143751 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025ae3ca-3082-4bc8-8611-5b23cec63932" containerName="setup-container" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.143770 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="025ae3ca-3082-4bc8-8611-5b23cec63932" containerName="setup-container" Jan 26 08:15:27 crc kubenswrapper[4806]: E0126 08:15:27.143803 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" containerName="rabbitmq" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.143813 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" containerName="rabbitmq" Jan 26 08:15:27 crc kubenswrapper[4806]: E0126 08:15:27.143840 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" containerName="setup-container" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.143852 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" containerName="setup-container" Jan 26 08:15:27 crc kubenswrapper[4806]: E0126 08:15:27.143864 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="025ae3ca-3082-4bc8-8611-5b23cec63932" containerName="rabbitmq" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.143871 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="025ae3ca-3082-4bc8-8611-5b23cec63932" containerName="rabbitmq" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.144054 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="025ae3ca-3082-4bc8-8611-5b23cec63932" containerName="rabbitmq" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.144085 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" containerName="rabbitmq" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.145718 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.149582 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.149624 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.149678 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.149805 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.149914 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-czc68" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.150194 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.150349 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.181547 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.274040 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86791215-4d1e-4b06-b013-fa551e935b74-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.274120 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j7rw\" (UniqueName: \"kubernetes.io/projected/86791215-4d1e-4b06-b013-fa551e935b74-kube-api-access-7j7rw\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.274155 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86791215-4d1e-4b06-b013-fa551e935b74-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.274203 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.274268 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86791215-4d1e-4b06-b013-fa551e935b74-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.274291 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/86791215-4d1e-4b06-b013-fa551e935b74-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.274334 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86791215-4d1e-4b06-b013-fa551e935b74-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.274374 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86791215-4d1e-4b06-b013-fa551e935b74-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.274405 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86791215-4d1e-4b06-b013-fa551e935b74-config-data\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.274434 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86791215-4d1e-4b06-b013-fa551e935b74-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.274469 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86791215-4d1e-4b06-b013-fa551e935b74-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.275873 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.293079 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.299698 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.301170 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.304090 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.304626 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.304735 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.312123 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-77bn9" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.312404 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.312423 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.312553 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.347270 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.379755 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86791215-4d1e-4b06-b013-fa551e935b74-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.379801 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86791215-4d1e-4b06-b013-fa551e935b74-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.379843 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da8e3d3a-b943-47b1-9c7d-5c44a1816934-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.379862 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86791215-4d1e-4b06-b013-fa551e935b74-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.379879 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.379910 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/86791215-4d1e-4b06-b013-fa551e935b74-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.379946 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86791215-4d1e-4b06-b013-fa551e935b74-config-data\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.379963 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da8e3d3a-b943-47b1-9c7d-5c44a1816934-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.379976 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da8e3d3a-b943-47b1-9c7d-5c44a1816934-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.379994 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86791215-4d1e-4b06-b013-fa551e935b74-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380020 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86791215-4d1e-4b06-b013-fa551e935b74-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380069 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js9zq\" (UniqueName: \"kubernetes.io/projected/da8e3d3a-b943-47b1-9c7d-5c44a1816934-kube-api-access-js9zq\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380090 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da8e3d3a-b943-47b1-9c7d-5c44a1816934-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380113 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86791215-4d1e-4b06-b013-fa551e935b74-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380138 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da8e3d3a-b943-47b1-9c7d-5c44a1816934-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380154 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da8e3d3a-b943-47b1-9c7d-5c44a1816934-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380178 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j7rw\" (UniqueName: \"kubernetes.io/projected/86791215-4d1e-4b06-b013-fa551e935b74-kube-api-access-7j7rw\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380195 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da8e3d3a-b943-47b1-9c7d-5c44a1816934-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380212 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86791215-4d1e-4b06-b013-fa551e935b74-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380235 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da8e3d3a-b943-47b1-9c7d-5c44a1816934-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380259 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da8e3d3a-b943-47b1-9c7d-5c44a1816934-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380284 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.380513 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.381268 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/86791215-4d1e-4b06-b013-fa551e935b74-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 
26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.382492 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/86791215-4d1e-4b06-b013-fa551e935b74-server-conf\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.382910 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/86791215-4d1e-4b06-b013-fa551e935b74-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.383334 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/86791215-4d1e-4b06-b013-fa551e935b74-config-data\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.383787 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/86791215-4d1e-4b06-b013-fa551e935b74-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.386780 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/86791215-4d1e-4b06-b013-fa551e935b74-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.386914 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/86791215-4d1e-4b06-b013-fa551e935b74-pod-info\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.388758 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/86791215-4d1e-4b06-b013-fa551e935b74-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.397577 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/86791215-4d1e-4b06-b013-fa551e935b74-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.405336 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j7rw\" (UniqueName: \"kubernetes.io/projected/86791215-4d1e-4b06-b013-fa551e935b74-kube-api-access-7j7rw\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.430161 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"86791215-4d1e-4b06-b013-fa551e935b74\") " 
pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.477682 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.481734 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js9zq\" (UniqueName: \"kubernetes.io/projected/da8e3d3a-b943-47b1-9c7d-5c44a1816934-kube-api-access-js9zq\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.482177 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da8e3d3a-b943-47b1-9c7d-5c44a1816934-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.483012 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da8e3d3a-b943-47b1-9c7d-5c44a1816934-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.483090 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da8e3d3a-b943-47b1-9c7d-5c44a1816934-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.483151 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da8e3d3a-b943-47b1-9c7d-5c44a1816934-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.483548 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da8e3d3a-b943-47b1-9c7d-5c44a1816934-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.483594 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da8e3d3a-b943-47b1-9c7d-5c44a1816934-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.483590 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da8e3d3a-b943-47b1-9c7d-5c44a1816934-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.483628 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da8e3d3a-b943-47b1-9c7d-5c44a1816934-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.483739 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da8e3d3a-b943-47b1-9c7d-5c44a1816934-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.483758 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.483819 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da8e3d3a-b943-47b1-9c7d-5c44a1816934-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.483835 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da8e3d3a-b943-47b1-9c7d-5c44a1816934-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.484198 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.484607 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da8e3d3a-b943-47b1-9c7d-5c44a1816934-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.484788 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da8e3d3a-b943-47b1-9c7d-5c44a1816934-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.486635 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da8e3d3a-b943-47b1-9c7d-5c44a1816934-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.490379 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da8e3d3a-b943-47b1-9c7d-5c44a1816934-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.491196 4806 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da8e3d3a-b943-47b1-9c7d-5c44a1816934-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.491421 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da8e3d3a-b943-47b1-9c7d-5c44a1816934-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.503001 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js9zq\" (UniqueName: \"kubernetes.io/projected/da8e3d3a-b943-47b1-9c7d-5c44a1816934-kube-api-access-js9zq\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.505014 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da8e3d3a-b943-47b1-9c7d-5c44a1816934-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.513986 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"da8e3d3a-b943-47b1-9c7d-5c44a1816934\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.633499 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:15:27 crc kubenswrapper[4806]: I0126 08:15:27.788035 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 08:15:28 crc kubenswrapper[4806]: I0126 08:15:28.001268 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86791215-4d1e-4b06-b013-fa551e935b74","Type":"ContainerStarted","Data":"58afa4e5b42584d756fa58f5c1d4491db298d7e08806b68cde1e7fdb106e19df"} Jan 26 08:15:28 crc kubenswrapper[4806]: I0126 08:15:28.125744 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 08:15:28 crc kubenswrapper[4806]: W0126 08:15:28.126127 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda8e3d3a_b943_47b1_9c7d_5c44a1816934.slice/crio-92575d9617bdfdf81cffad9fec86f2473270bab3dabba81bed276b51fc2f09c5 WatchSource:0}: Error finding container 92575d9617bdfdf81cffad9fec86f2473270bab3dabba81bed276b51fc2f09c5: Status 404 returned error can't find the container with id 92575d9617bdfdf81cffad9fec86f2473270bab3dabba81bed276b51fc2f09c5 Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.008796 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da8e3d3a-b943-47b1-9c7d-5c44a1816934","Type":"ContainerStarted","Data":"92575d9617bdfdf81cffad9fec86f2473270bab3dabba81bed276b51fc2f09c5"} Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.050536 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="025ae3ca-3082-4bc8-8611-5b23cec63932" path="/var/lib/kubelet/pods/025ae3ca-3082-4bc8-8611-5b23cec63932/volumes" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.051819 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f0a4edb-6a17-43a2-9c55-88ded9dfcc35" path="/var/lib/kubelet/pods/8f0a4edb-6a17-43a2-9c55-88ded9dfcc35/volumes" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.348903 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-sp966"] Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.350692 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.360166 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.420808 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.420861 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.420944 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-config\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.421045 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.421087 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.421112 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.421250 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pp47\" (UniqueName: \"kubernetes.io/projected/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-kube-api-access-7pp47\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.425812 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-sp966"] Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.522936 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.523082 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.523104 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.523146 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pp47\" (UniqueName: \"kubernetes.io/projected/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-kube-api-access-7pp47\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.523353 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.523399 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.523563 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-config\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.524001 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.524072 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.524175 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: 
\"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.524415 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-config\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.524742 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.524786 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.556036 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pp47\" (UniqueName: \"kubernetes.io/projected/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-kube-api-access-7pp47\") pod \"dnsmasq-dns-5b75489c6f-sp966\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:29 crc kubenswrapper[4806]: I0126 08:15:29.675987 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:30 crc kubenswrapper[4806]: I0126 08:15:30.018533 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da8e3d3a-b943-47b1-9c7d-5c44a1816934","Type":"ContainerStarted","Data":"dd4af88d04963eeb3e147b4e80c26f10fd2426c3c016a9abc5aa4f502245a724"} Jan 26 08:15:30 crc kubenswrapper[4806]: I0126 08:15:30.020513 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86791215-4d1e-4b06-b013-fa551e935b74","Type":"ContainerStarted","Data":"17e2b167640b38907c57e35bfd8417ebf4fc5b9922e7a6d0c0d2ce66f863df4f"} Jan 26 08:15:30 crc kubenswrapper[4806]: I0126 08:15:30.168162 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-sp966"] Jan 26 08:15:30 crc kubenswrapper[4806]: W0126 08:15:30.171600 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf38dea41_6b2d_40f4_ab2c_c10600f0ed9e.slice/crio-9d88991d54edd66feed7754ef4b6d5521b03c54234a67700af3a53285fa98f8c WatchSource:0}: Error finding container 9d88991d54edd66feed7754ef4b6d5521b03c54234a67700af3a53285fa98f8c: Status 404 returned error can't find the container with id 9d88991d54edd66feed7754ef4b6d5521b03c54234a67700af3a53285fa98f8c Jan 26 08:15:31 crc kubenswrapper[4806]: I0126 08:15:31.032418 4806 generic.go:334] "Generic (PLEG): container finished" podID="f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" containerID="ce1aadcbb471cf3df654427addc7bec36fdf485e01fdcd38dba12c71c3fc5c0f" exitCode=0 Jan 26 08:15:31 crc kubenswrapper[4806]: I0126 08:15:31.032499 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" 
event={"ID":"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e","Type":"ContainerDied","Data":"ce1aadcbb471cf3df654427addc7bec36fdf485e01fdcd38dba12c71c3fc5c0f"} Jan 26 08:15:31 crc kubenswrapper[4806]: I0126 08:15:31.032894 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" event={"ID":"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e","Type":"ContainerStarted","Data":"9d88991d54edd66feed7754ef4b6d5521b03c54234a67700af3a53285fa98f8c"} Jan 26 08:15:32 crc kubenswrapper[4806]: I0126 08:15:32.042371 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" event={"ID":"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e","Type":"ContainerStarted","Data":"2e246ee7c82b3567cfedbe759c4f52bf5acfb3b8546e95ca3074814f2b65837c"} Jan 26 08:15:32 crc kubenswrapper[4806]: I0126 08:15:32.042925 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:32 crc kubenswrapper[4806]: I0126 08:15:32.072122 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" podStartSLOduration=3.072100081 podStartE2EDuration="3.072100081s" podCreationTimestamp="2026-01-26 08:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:15:32.058883071 +0000 UTC m=+1311.323291137" watchObservedRunningTime="2026-01-26 08:15:32.072100081 +0000 UTC m=+1311.336508137" Jan 26 08:15:39 crc kubenswrapper[4806]: I0126 08:15:39.677715 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:39 crc kubenswrapper[4806]: I0126 08:15:39.739263 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-dptl9"] Jan 26 08:15:39 crc kubenswrapper[4806]: I0126 08:15:39.739560 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" podUID="bd8a87c9-fdf5-48dd-9d72-5767bad62a99" containerName="dnsmasq-dns" containerID="cri-o://f3df4a4892569a8fb143b20d9fad2bd154a0d27579b6b3335113ffd8ac087f6c" gracePeriod=10 Jan 26 08:15:39 crc kubenswrapper[4806]: I0126 08:15:39.934089 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64c9b56dc5-dqgzc"] Jan 26 08:15:39 crc kubenswrapper[4806]: I0126 08:15:39.935639 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:39 crc kubenswrapper[4806]: I0126 08:15:39.959300 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64c9b56dc5-dqgzc"] Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.035198 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-dns-svc\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.035254 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dqkj\" (UniqueName: \"kubernetes.io/projected/94d36ef4-cca6-4740-be74-d88ac60ed646-kube-api-access-4dqkj\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.035288 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-ovsdbserver-sb\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.035341 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-dns-swift-storage-0\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.035373 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-ovsdbserver-nb\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.035463 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-openstack-edpm-ipam\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.035514 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-config\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.118901 4806 generic.go:334] "Generic (PLEG): container finished" podID="bd8a87c9-fdf5-48dd-9d72-5767bad62a99" containerID="f3df4a4892569a8fb143b20d9fad2bd154a0d27579b6b3335113ffd8ac087f6c" exitCode=0 Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.119247 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" 
event={"ID":"bd8a87c9-fdf5-48dd-9d72-5767bad62a99","Type":"ContainerDied","Data":"f3df4a4892569a8fb143b20d9fad2bd154a0d27579b6b3335113ffd8ac087f6c"} Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.139994 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-openstack-edpm-ipam\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.140253 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-config\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.140353 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-dns-svc\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.140380 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dqkj\" (UniqueName: \"kubernetes.io/projected/94d36ef4-cca6-4740-be74-d88ac60ed646-kube-api-access-4dqkj\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.140414 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-ovsdbserver-sb\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.140480 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-dns-swift-storage-0\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.140516 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-ovsdbserver-nb\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.143674 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-openstack-edpm-ipam\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.145279 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-config\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: 
\"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.147221 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-dns-swift-storage-0\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.148217 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-ovsdbserver-nb\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.148693 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-dns-svc\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.152069 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94d36ef4-cca6-4740-be74-d88ac60ed646-ovsdbserver-sb\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.169762 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dqkj\" (UniqueName: \"kubernetes.io/projected/94d36ef4-cca6-4740-be74-d88ac60ed646-kube-api-access-4dqkj\") pod \"dnsmasq-dns-64c9b56dc5-dqgzc\" (UID: \"94d36ef4-cca6-4740-be74-d88ac60ed646\") " pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.250426 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.353205 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.468052 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-dns-swift-storage-0\") pod \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.468157 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-config\") pod \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.468222 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-ovsdbserver-sb\") pod \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.468275 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xx7t\" (UniqueName: \"kubernetes.io/projected/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-kube-api-access-4xx7t\") pod \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.468439 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-ovsdbserver-nb\") pod \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.468469 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-dns-svc\") pod \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\" (UID: \"bd8a87c9-fdf5-48dd-9d72-5767bad62a99\") " Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.482442 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-kube-api-access-4xx7t" (OuterVolumeSpecName: "kube-api-access-4xx7t") pod "bd8a87c9-fdf5-48dd-9d72-5767bad62a99" (UID: "bd8a87c9-fdf5-48dd-9d72-5767bad62a99"). InnerVolumeSpecName "kube-api-access-4xx7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.522408 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd8a87c9-fdf5-48dd-9d72-5767bad62a99" (UID: "bd8a87c9-fdf5-48dd-9d72-5767bad62a99"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.541279 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-config" (OuterVolumeSpecName: "config") pod "bd8a87c9-fdf5-48dd-9d72-5767bad62a99" (UID: "bd8a87c9-fdf5-48dd-9d72-5767bad62a99"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.557493 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bd8a87c9-fdf5-48dd-9d72-5767bad62a99" (UID: "bd8a87c9-fdf5-48dd-9d72-5767bad62a99"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.561941 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd8a87c9-fdf5-48dd-9d72-5767bad62a99" (UID: "bd8a87c9-fdf5-48dd-9d72-5767bad62a99"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.571659 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.571679 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.571689 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.571698 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.571706 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xx7t\" (UniqueName: \"kubernetes.io/projected/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-kube-api-access-4xx7t\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.577366 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd8a87c9-fdf5-48dd-9d72-5767bad62a99" (UID: "bd8a87c9-fdf5-48dd-9d72-5767bad62a99"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.674472 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd8a87c9-fdf5-48dd-9d72-5767bad62a99-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:40 crc kubenswrapper[4806]: I0126 08:15:40.787830 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64c9b56dc5-dqgzc"] Jan 26 08:15:41 crc kubenswrapper[4806]: I0126 08:15:41.128763 4806 generic.go:334] "Generic (PLEG): container finished" podID="94d36ef4-cca6-4740-be74-d88ac60ed646" containerID="4cb8643d845a681184f201593c3aad6f01c523a20daf59fb7a8bf1d5611ddcc9" exitCode=0 Jan 26 08:15:41 crc kubenswrapper[4806]: I0126 08:15:41.128853 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" event={"ID":"94d36ef4-cca6-4740-be74-d88ac60ed646","Type":"ContainerDied","Data":"4cb8643d845a681184f201593c3aad6f01c523a20daf59fb7a8bf1d5611ddcc9"} Jan 26 08:15:41 crc kubenswrapper[4806]: I0126 08:15:41.129085 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" event={"ID":"94d36ef4-cca6-4740-be74-d88ac60ed646","Type":"ContainerStarted","Data":"9240db1484f269531a5e272c9ce914b466d1ac5a18ce46f5382dcf41dc7255de"} Jan 26 08:15:41 crc kubenswrapper[4806]: I0126 08:15:41.131744 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" event={"ID":"bd8a87c9-fdf5-48dd-9d72-5767bad62a99","Type":"ContainerDied","Data":"e6ff4b1a8ce3ebf2c2b29d1f4529f62b6d8efea14c07a8bf6aab0d789023742c"} Jan 26 08:15:41 crc kubenswrapper[4806]: I0126 08:15:41.131902 4806 scope.go:117] "RemoveContainer" containerID="f3df4a4892569a8fb143b20d9fad2bd154a0d27579b6b3335113ffd8ac087f6c" Jan 26 08:15:41 crc kubenswrapper[4806]: I0126 08:15:41.131852 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-dptl9" Jan 26 08:15:41 crc kubenswrapper[4806]: I0126 08:15:41.307110 4806 scope.go:117] "RemoveContainer" containerID="c80b45dfe22113616d620ce7f237b8ba546243adf081b090820a8f78bb275e11" Jan 26 08:15:41 crc kubenswrapper[4806]: I0126 08:15:41.334217 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-dptl9"] Jan 26 08:15:41 crc kubenswrapper[4806]: I0126 08:15:41.343621 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-dptl9"] Jan 26 08:15:42 crc kubenswrapper[4806]: I0126 08:15:42.143970 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" event={"ID":"94d36ef4-cca6-4740-be74-d88ac60ed646","Type":"ContainerStarted","Data":"ea99711aaa391d391caec10ed1c3b76bad6c0edc1440171e0a4c3adfe6f9cc4b"} Jan 26 08:15:42 crc kubenswrapper[4806]: I0126 08:15:42.144479 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:42 crc kubenswrapper[4806]: I0126 08:15:42.173101 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" podStartSLOduration=3.173079592 podStartE2EDuration="3.173079592s" podCreationTimestamp="2026-01-26 08:15:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:15:42.162476076 +0000 UTC m=+1321.426884142" watchObservedRunningTime="2026-01-26 08:15:42.173079592 +0000 UTC m=+1321.437487658" Jan 26 08:15:42 crc kubenswrapper[4806]: I0126 08:15:42.202085 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 08:15:43 crc kubenswrapper[4806]: I0126 08:15:43.054452 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd8a87c9-fdf5-48dd-9d72-5767bad62a99" path="/var/lib/kubelet/pods/bd8a87c9-fdf5-48dd-9d72-5767bad62a99/volumes" Jan 26 08:15:50 crc kubenswrapper[4806]: I0126 08:15:50.251728 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-64c9b56dc5-dqgzc" Jan 26 08:15:50 crc kubenswrapper[4806]: I0126 08:15:50.376136 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-sp966"] Jan 26 08:15:50 crc kubenswrapper[4806]: I0126 08:15:50.376710 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" podUID="f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" containerName="dnsmasq-dns" containerID="cri-o://2e246ee7c82b3567cfedbe759c4f52bf5acfb3b8546e95ca3074814f2b65837c" gracePeriod=10 Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.233272 4806 generic.go:334] "Generic (PLEG): container finished" podID="f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" containerID="2e246ee7c82b3567cfedbe759c4f52bf5acfb3b8546e95ca3074814f2b65837c" exitCode=0 Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.233341 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" event={"ID":"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e","Type":"ContainerDied","Data":"2e246ee7c82b3567cfedbe759c4f52bf5acfb3b8546e95ca3074814f2b65837c"} Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.392583 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.587885 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-ovsdbserver-sb\") pod \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.588631 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-openstack-edpm-ipam\") pod \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.588728 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-ovsdbserver-nb\") pod \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.588850 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-config\") pod \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.588896 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pp47\" (UniqueName: \"kubernetes.io/projected/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-kube-api-access-7pp47\") pod \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.588937 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-dns-svc\") pod \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.588976 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-dns-swift-storage-0\") pod \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\" (UID: \"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e\") " Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.632997 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-kube-api-access-7pp47" (OuterVolumeSpecName: "kube-api-access-7pp47") pod "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" (UID: "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e"). InnerVolumeSpecName "kube-api-access-7pp47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.674761 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" (UID: "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.678694 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" (UID: "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.687363 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" (UID: "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.691589 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pp47\" (UniqueName: \"kubernetes.io/projected/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-kube-api-access-7pp47\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.691616 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.691627 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.691637 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.694488 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" (UID: "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.697188 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-config" (OuterVolumeSpecName: "config") pod "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" (UID: "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.713957 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" (UID: "f38dea41-6b2d-40f4-ab2c-c10600f0ed9e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.792828 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.792868 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:51 crc kubenswrapper[4806]: I0126 08:15:51.792879 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:15:52 crc kubenswrapper[4806]: I0126 08:15:52.244174 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" event={"ID":"f38dea41-6b2d-40f4-ab2c-c10600f0ed9e","Type":"ContainerDied","Data":"9d88991d54edd66feed7754ef4b6d5521b03c54234a67700af3a53285fa98f8c"} Jan 26 08:15:52 crc kubenswrapper[4806]: I0126 08:15:52.244453 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-sp966" Jan 26 08:15:52 crc kubenswrapper[4806]: I0126 08:15:52.244494 4806 scope.go:117] "RemoveContainer" containerID="2e246ee7c82b3567cfedbe759c4f52bf5acfb3b8546e95ca3074814f2b65837c" Jan 26 08:15:52 crc kubenswrapper[4806]: I0126 08:15:52.267559 4806 scope.go:117] "RemoveContainer" containerID="ce1aadcbb471cf3df654427addc7bec36fdf485e01fdcd38dba12c71c3fc5c0f" Jan 26 08:15:52 crc kubenswrapper[4806]: I0126 08:15:52.308601 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-sp966"] Jan 26 08:15:52 crc kubenswrapper[4806]: I0126 08:15:52.327085 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-sp966"] Jan 26 08:15:53 crc kubenswrapper[4806]: I0126 08:15:53.054503 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" path="/var/lib/kubelet/pods/f38dea41-6b2d-40f4-ab2c-c10600f0ed9e/volumes" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.336187 4806 generic.go:334] "Generic (PLEG): container finished" podID="86791215-4d1e-4b06-b013-fa551e935b74" containerID="17e2b167640b38907c57e35bfd8417ebf4fc5b9922e7a6d0c0d2ce66f863df4f" exitCode=0 Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.336244 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86791215-4d1e-4b06-b013-fa551e935b74","Type":"ContainerDied","Data":"17e2b167640b38907c57e35bfd8417ebf4fc5b9922e7a6d0c0d2ce66f863df4f"} Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.339438 4806 generic.go:334] "Generic (PLEG): container finished" podID="da8e3d3a-b943-47b1-9c7d-5c44a1816934" containerID="dd4af88d04963eeb3e147b4e80c26f10fd2426c3c016a9abc5aa4f502245a724" exitCode=0 Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.339477 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da8e3d3a-b943-47b1-9c7d-5c44a1816934","Type":"ContainerDied","Data":"dd4af88d04963eeb3e147b4e80c26f10fd2426c3c016a9abc5aa4f502245a724"} Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.765737 4806 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd"] Jan 26 08:16:02 crc kubenswrapper[4806]: E0126 08:16:02.766407 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" containerName="dnsmasq-dns" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.766430 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" containerName="dnsmasq-dns" Jan 26 08:16:02 crc kubenswrapper[4806]: E0126 08:16:02.766458 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" containerName="init" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.766467 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" containerName="init" Jan 26 08:16:02 crc kubenswrapper[4806]: E0126 08:16:02.766485 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8a87c9-fdf5-48dd-9d72-5767bad62a99" containerName="init" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.766496 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8a87c9-fdf5-48dd-9d72-5767bad62a99" containerName="init" Jan 26 08:16:02 crc kubenswrapper[4806]: E0126 08:16:02.766531 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd8a87c9-fdf5-48dd-9d72-5767bad62a99" containerName="dnsmasq-dns" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.766541 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd8a87c9-fdf5-48dd-9d72-5767bad62a99" containerName="dnsmasq-dns" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.766771 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd8a87c9-fdf5-48dd-9d72-5767bad62a99" containerName="dnsmasq-dns" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.766798 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f38dea41-6b2d-40f4-ab2c-c10600f0ed9e" containerName="dnsmasq-dns" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.767442 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.776667 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.777335 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.777386 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.778564 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.807779 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd"] Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.920705 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.920766 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.920887 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:02 crc kubenswrapper[4806]: I0126 08:16:02.920949 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpvq5\" (UniqueName: \"kubernetes.io/projected/e18dfde2-7334-4c26-a7bb-b79bf78fad03-kube-api-access-cpvq5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.022904 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpvq5\" (UniqueName: \"kubernetes.io/projected/e18dfde2-7334-4c26-a7bb-b79bf78fad03-kube-api-access-cpvq5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.023070 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.023097 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.023135 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.027743 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.028075 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.030173 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.061374 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpvq5\" (UniqueName: \"kubernetes.io/projected/e18dfde2-7334-4c26-a7bb-b79bf78fad03-kube-api-access-cpvq5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.142691 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.363330 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"86791215-4d1e-4b06-b013-fa551e935b74","Type":"ContainerStarted","Data":"fe203ad2c6d5891b1349dadb6024d489bca5cb801e233adfec5320656d02de33"} Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.364700 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.373730 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"da8e3d3a-b943-47b1-9c7d-5c44a1816934","Type":"ContainerStarted","Data":"018cb836cef1a54a9956477c79cb314fd237151e78816b8a2da1a610b45e29d4"} Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.374445 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.393853 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.393833157 podStartE2EDuration="36.393833157s" podCreationTimestamp="2026-01-26 08:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:16:03.383420616 +0000 UTC m=+1342.647828692" watchObservedRunningTime="2026-01-26 08:16:03.393833157 +0000 UTC m=+1342.658241213" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.415484 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.415464032 podStartE2EDuration="36.415464032s" podCreationTimestamp="2026-01-26 08:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:16:03.413462436 +0000 UTC m=+1342.677870492" watchObservedRunningTime="2026-01-26 08:16:03.415464032 +0000 UTC m=+1342.679872088" Jan 26 08:16:03 crc kubenswrapper[4806]: I0126 08:16:03.837967 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd"] Jan 26 08:16:04 crc kubenswrapper[4806]: I0126 08:16:04.387945 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" event={"ID":"e18dfde2-7334-4c26-a7bb-b79bf78fad03","Type":"ContainerStarted","Data":"cbe56c2d7435e58c87c29e2798420f1065be538244cc4de238288bc5dcb46833"} Jan 26 08:16:15 crc kubenswrapper[4806]: I0126 08:16:15.493513 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" event={"ID":"e18dfde2-7334-4c26-a7bb-b79bf78fad03","Type":"ContainerStarted","Data":"a89d45081ce3bfb3d7cd7d126d7935b36e7c53ca8437bd0fe6618287dcb5c281"} Jan 26 08:16:15 crc kubenswrapper[4806]: I0126 08:16:15.530000 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" podStartSLOduration=2.95871505 podStartE2EDuration="13.529984474s" podCreationTimestamp="2026-01-26 08:16:02 +0000 UTC" firstStartedPulling="2026-01-26 08:16:03.839943703 +0000 UTC m=+1343.104351769" lastFinishedPulling="2026-01-26 08:16:14.411213137 +0000 UTC m=+1353.675621193" 
observedRunningTime="2026-01-26 08:16:15.525576111 +0000 UTC m=+1354.789984167" watchObservedRunningTime="2026-01-26 08:16:15.529984474 +0000 UTC m=+1354.794392530" Jan 26 08:16:17 crc kubenswrapper[4806]: I0126 08:16:17.480716 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 08:16:17 crc kubenswrapper[4806]: I0126 08:16:17.636732 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 08:16:27 crc kubenswrapper[4806]: I0126 08:16:27.602515 4806 generic.go:334] "Generic (PLEG): container finished" podID="e18dfde2-7334-4c26-a7bb-b79bf78fad03" containerID="a89d45081ce3bfb3d7cd7d126d7935b36e7c53ca8437bd0fe6618287dcb5c281" exitCode=0 Jan 26 08:16:27 crc kubenswrapper[4806]: I0126 08:16:27.602662 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" event={"ID":"e18dfde2-7334-4c26-a7bb-b79bf78fad03","Type":"ContainerDied","Data":"a89d45081ce3bfb3d7cd7d126d7935b36e7c53ca8437bd0fe6618287dcb5c281"} Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.105779 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.179677 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-inventory\") pod \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.179733 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpvq5\" (UniqueName: \"kubernetes.io/projected/e18dfde2-7334-4c26-a7bb-b79bf78fad03-kube-api-access-cpvq5\") pod \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.179794 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-repo-setup-combined-ca-bundle\") pod \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.179826 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-ssh-key-openstack-edpm-ipam\") pod \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\" (UID: \"e18dfde2-7334-4c26-a7bb-b79bf78fad03\") " Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.198213 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e18dfde2-7334-4c26-a7bb-b79bf78fad03" (UID: "e18dfde2-7334-4c26-a7bb-b79bf78fad03"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.199447 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e18dfde2-7334-4c26-a7bb-b79bf78fad03-kube-api-access-cpvq5" (OuterVolumeSpecName: "kube-api-access-cpvq5") pod "e18dfde2-7334-4c26-a7bb-b79bf78fad03" (UID: "e18dfde2-7334-4c26-a7bb-b79bf78fad03"). InnerVolumeSpecName "kube-api-access-cpvq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.209500 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e18dfde2-7334-4c26-a7bb-b79bf78fad03" (UID: "e18dfde2-7334-4c26-a7bb-b79bf78fad03"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.214281 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-inventory" (OuterVolumeSpecName: "inventory") pod "e18dfde2-7334-4c26-a7bb-b79bf78fad03" (UID: "e18dfde2-7334-4c26-a7bb-b79bf78fad03"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.282240 4806 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.282271 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.282282 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e18dfde2-7334-4c26-a7bb-b79bf78fad03-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.282291 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpvq5\" (UniqueName: \"kubernetes.io/projected/e18dfde2-7334-4c26-a7bb-b79bf78fad03-kube-api-access-cpvq5\") on node \"crc\" DevicePath \"\"" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.627733 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" event={"ID":"e18dfde2-7334-4c26-a7bb-b79bf78fad03","Type":"ContainerDied","Data":"cbe56c2d7435e58c87c29e2798420f1065be538244cc4de238288bc5dcb46833"} Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.627772 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbe56c2d7435e58c87c29e2798420f1065be538244cc4de238288bc5dcb46833" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.628257 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.765299 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx"] Jan 26 08:16:29 crc kubenswrapper[4806]: E0126 08:16:29.765865 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e18dfde2-7334-4c26-a7bb-b79bf78fad03" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.765892 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e18dfde2-7334-4c26-a7bb-b79bf78fad03" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.766147 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e18dfde2-7334-4c26-a7bb-b79bf78fad03" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.766970 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.770021 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.770287 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.770469 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.774889 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.776415 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx"] Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.899101 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d4bdx\" (UID: \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.899223 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2wfl\" (UniqueName: \"kubernetes.io/projected/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-kube-api-access-q2wfl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d4bdx\" (UID: \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:29 crc kubenswrapper[4806]: I0126 08:16:29.899246 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d4bdx\" (UID: \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:30 crc kubenswrapper[4806]: I0126 08:16:30.000780 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q2wfl\" (UniqueName: \"kubernetes.io/projected/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-kube-api-access-q2wfl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d4bdx\" (UID: \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:30 crc kubenswrapper[4806]: I0126 08:16:30.001135 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d4bdx\" (UID: \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:30 crc kubenswrapper[4806]: I0126 08:16:30.001415 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d4bdx\" (UID: \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:30 crc kubenswrapper[4806]: I0126 08:16:30.008043 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d4bdx\" (UID: \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:30 crc kubenswrapper[4806]: I0126 08:16:30.009565 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d4bdx\" (UID: \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:30 crc kubenswrapper[4806]: I0126 08:16:30.020665 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2wfl\" (UniqueName: \"kubernetes.io/projected/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-kube-api-access-q2wfl\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-d4bdx\" (UID: \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:30 crc kubenswrapper[4806]: I0126 08:16:30.083827 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:30 crc kubenswrapper[4806]: I0126 08:16:30.757436 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx"] Jan 26 08:16:31 crc kubenswrapper[4806]: I0126 08:16:31.651592 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" event={"ID":"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356","Type":"ContainerStarted","Data":"5c2334883ea999c26568f6b986c07f9b4716628f74e7f2939c7b36ba0abe6357"} Jan 26 08:16:31 crc kubenswrapper[4806]: I0126 08:16:31.651947 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" event={"ID":"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356","Type":"ContainerStarted","Data":"d84e3df10a1800a2f4b671ae04ec1dfaa4cbab52af13d8f380af659b30adae4c"} Jan 26 08:16:31 crc kubenswrapper[4806]: I0126 08:16:31.668275 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" podStartSLOduration=2.205362678 podStartE2EDuration="2.668253893s" podCreationTimestamp="2026-01-26 08:16:29 +0000 UTC" firstStartedPulling="2026-01-26 08:16:30.709235214 +0000 UTC m=+1369.973643270" lastFinishedPulling="2026-01-26 08:16:31.172126429 +0000 UTC m=+1370.436534485" observedRunningTime="2026-01-26 08:16:31.664737265 +0000 UTC m=+1370.929145311" watchObservedRunningTime="2026-01-26 08:16:31.668253893 +0000 UTC m=+1370.932661949" Jan 26 08:16:34 crc kubenswrapper[4806]: I0126 08:16:34.682384 4806 generic.go:334] "Generic (PLEG): container finished" podID="2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356" containerID="5c2334883ea999c26568f6b986c07f9b4716628f74e7f2939c7b36ba0abe6357" exitCode=0 Jan 26 08:16:34 crc kubenswrapper[4806]: I0126 08:16:34.682480 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" event={"ID":"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356","Type":"ContainerDied","Data":"5c2334883ea999c26568f6b986c07f9b4716628f74e7f2939c7b36ba0abe6357"} Jan 26 08:16:34 crc kubenswrapper[4806]: I0126 08:16:34.840707 4806 scope.go:117] "RemoveContainer" containerID="b5f28c2f6936959437a8720d86a5fd0a6d58c9f8b90ccd071827bf23cd18a21a" Jan 26 08:16:34 crc kubenswrapper[4806]: I0126 08:16:34.869247 4806 scope.go:117] "RemoveContainer" containerID="c1a8985ec4cf190a37f23e672d8a8e5ff1509ea1220c951f22b374a93149782d" Jan 26 08:16:34 crc kubenswrapper[4806]: I0126 08:16:34.916257 4806 scope.go:117] "RemoveContainer" containerID="f5b7aba37df1ab70703a3ef3dc28df0cf9e18d2c32129934f84e93f139ee5b72" Jan 26 08:16:34 crc kubenswrapper[4806]: I0126 08:16:34.959853 4806 scope.go:117] "RemoveContainer" containerID="344e67022fbd133bb9dec4fc7a0fe008a30abc1aa17d81e24539680056095056" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.154119 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.251418 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-ssh-key-openstack-edpm-ipam\") pod \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\" (UID: \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\") " Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.251596 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2wfl\" (UniqueName: \"kubernetes.io/projected/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-kube-api-access-q2wfl\") pod \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\" (UID: \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\") " Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.251657 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-inventory\") pod \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\" (UID: \"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356\") " Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.258771 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-kube-api-access-q2wfl" (OuterVolumeSpecName: "kube-api-access-q2wfl") pod "2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356" (UID: "2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356"). InnerVolumeSpecName "kube-api-access-q2wfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.281423 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-inventory" (OuterVolumeSpecName: "inventory") pod "2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356" (UID: "2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.284667 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356" (UID: "2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.354066 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.354144 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2wfl\" (UniqueName: \"kubernetes.io/projected/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-kube-api-access-q2wfl\") on node \"crc\" DevicePath \"\"" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.354159 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.707309 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" event={"ID":"2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356","Type":"ContainerDied","Data":"d84e3df10a1800a2f4b671ae04ec1dfaa4cbab52af13d8f380af659b30adae4c"} Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.707354 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d84e3df10a1800a2f4b671ae04ec1dfaa4cbab52af13d8f380af659b30adae4c" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.707367 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-d4bdx" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.772748 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5"] Jan 26 08:16:36 crc kubenswrapper[4806]: E0126 08:16:36.773162 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.773180 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.773363 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.774026 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.776994 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.777039 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.776994 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.778346 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.807075 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5"] Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.864215 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.864341 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.864413 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.864511 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp4ch\" (UniqueName: \"kubernetes.io/projected/b38882dc-facd-46ab-96ce-176528439b16-kube-api-access-gp4ch\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.966598 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gp4ch\" (UniqueName: \"kubernetes.io/projected/b38882dc-facd-46ab-96ce-176528439b16-kube-api-access-gp4ch\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.966734 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.966816 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.966853 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.971184 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.971480 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.975934 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:36 crc kubenswrapper[4806]: I0126 08:16:36.984702 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gp4ch\" (UniqueName: \"kubernetes.io/projected/b38882dc-facd-46ab-96ce-176528439b16-kube-api-access-gp4ch\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:37 crc kubenswrapper[4806]: I0126 08:16:37.100664 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:16:37 crc kubenswrapper[4806]: I0126 08:16:37.625379 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5"] Jan 26 08:16:37 crc kubenswrapper[4806]: I0126 08:16:37.717144 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" event={"ID":"b38882dc-facd-46ab-96ce-176528439b16","Type":"ContainerStarted","Data":"b359cb48cbb1098af00c44e30926cef84a84bff4b65e0fa5a4b3c68c408c4129"} Jan 26 08:16:38 crc kubenswrapper[4806]: I0126 08:16:38.727044 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" event={"ID":"b38882dc-facd-46ab-96ce-176528439b16","Type":"ContainerStarted","Data":"7ea37ca8b41b004c3f179412c32c2a0b4dd6856565bb6146bcbeceffd1ce0e2c"} Jan 26 08:17:15 crc kubenswrapper[4806]: I0126 08:17:15.806375 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:17:15 crc kubenswrapper[4806]: I0126 08:17:15.808187 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.080227 4806 scope.go:117] "RemoveContainer" containerID="eec2b19530f5fed972c2de385b241db04e751ef987f525135d79fd350fcc0a31" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.159481 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" podStartSLOduration=58.747147218 podStartE2EDuration="59.159463288s" podCreationTimestamp="2026-01-26 08:16:36 +0000 UTC" firstStartedPulling="2026-01-26 08:16:37.629846743 +0000 UTC m=+1376.894254799" lastFinishedPulling="2026-01-26 08:16:38.042162813 +0000 UTC m=+1377.306570869" observedRunningTime="2026-01-26 08:16:38.755015309 +0000 UTC m=+1378.019423375" watchObservedRunningTime="2026-01-26 08:17:35.159463288 +0000 UTC m=+1434.423871344" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.161540 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5rjsr"] Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.163906 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.177361 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5rjsr"] Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.333422 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a53787-c16b-4a37-8090-f88af2cf74b8-utilities\") pod \"redhat-operators-5rjsr\" (UID: \"d8a53787-c16b-4a37-8090-f88af2cf74b8\") " pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.333542 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcxsn\" (UniqueName: \"kubernetes.io/projected/d8a53787-c16b-4a37-8090-f88af2cf74b8-kube-api-access-bcxsn\") pod \"redhat-operators-5rjsr\" (UID: \"d8a53787-c16b-4a37-8090-f88af2cf74b8\") " pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.333590 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a53787-c16b-4a37-8090-f88af2cf74b8-catalog-content\") pod \"redhat-operators-5rjsr\" (UID: \"d8a53787-c16b-4a37-8090-f88af2cf74b8\") " pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.436048 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a53787-c16b-4a37-8090-f88af2cf74b8-utilities\") pod \"redhat-operators-5rjsr\" (UID: \"d8a53787-c16b-4a37-8090-f88af2cf74b8\") " pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.436235 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcxsn\" (UniqueName: \"kubernetes.io/projected/d8a53787-c16b-4a37-8090-f88af2cf74b8-kube-api-access-bcxsn\") pod \"redhat-operators-5rjsr\" (UID: \"d8a53787-c16b-4a37-8090-f88af2cf74b8\") " pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.436319 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a53787-c16b-4a37-8090-f88af2cf74b8-catalog-content\") pod \"redhat-operators-5rjsr\" (UID: \"d8a53787-c16b-4a37-8090-f88af2cf74b8\") " pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.436924 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a53787-c16b-4a37-8090-f88af2cf74b8-utilities\") pod \"redhat-operators-5rjsr\" (UID: \"d8a53787-c16b-4a37-8090-f88af2cf74b8\") " pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.437113 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a53787-c16b-4a37-8090-f88af2cf74b8-catalog-content\") pod \"redhat-operators-5rjsr\" (UID: \"d8a53787-c16b-4a37-8090-f88af2cf74b8\") " pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.456936 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bcxsn\" (UniqueName: \"kubernetes.io/projected/d8a53787-c16b-4a37-8090-f88af2cf74b8-kube-api-access-bcxsn\") pod \"redhat-operators-5rjsr\" (UID: \"d8a53787-c16b-4a37-8090-f88af2cf74b8\") " pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:35 crc kubenswrapper[4806]: I0126 08:17:35.485197 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:36 crc kubenswrapper[4806]: I0126 08:17:36.006422 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5rjsr"] Jan 26 08:17:36 crc kubenswrapper[4806]: I0126 08:17:36.258397 4806 generic.go:334] "Generic (PLEG): container finished" podID="d8a53787-c16b-4a37-8090-f88af2cf74b8" containerID="a3b76f80cb778ac443a58b6804ba32cd06e1e7ea2e62cadcdf30351db2093b56" exitCode=0 Jan 26 08:17:36 crc kubenswrapper[4806]: I0126 08:17:36.258501 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rjsr" event={"ID":"d8a53787-c16b-4a37-8090-f88af2cf74b8","Type":"ContainerDied","Data":"a3b76f80cb778ac443a58b6804ba32cd06e1e7ea2e62cadcdf30351db2093b56"} Jan 26 08:17:36 crc kubenswrapper[4806]: I0126 08:17:36.258792 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rjsr" event={"ID":"d8a53787-c16b-4a37-8090-f88af2cf74b8","Type":"ContainerStarted","Data":"f421bcad75396f5f68a773f335920f0c9fdb83e52e7f60d70116a2b96f2b91b4"} Jan 26 08:17:37 crc kubenswrapper[4806]: I0126 08:17:37.270583 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rjsr" event={"ID":"d8a53787-c16b-4a37-8090-f88af2cf74b8","Type":"ContainerStarted","Data":"8d37a9a4fc34d282eeb14faf9759e36aab47b72b802f67f938482a0fe86152c9"} Jan 26 08:17:40 crc kubenswrapper[4806]: I0126 08:17:40.297915 4806 generic.go:334] "Generic (PLEG): container finished" podID="d8a53787-c16b-4a37-8090-f88af2cf74b8" containerID="8d37a9a4fc34d282eeb14faf9759e36aab47b72b802f67f938482a0fe86152c9" exitCode=0 Jan 26 08:17:40 crc kubenswrapper[4806]: I0126 08:17:40.297987 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rjsr" event={"ID":"d8a53787-c16b-4a37-8090-f88af2cf74b8","Type":"ContainerDied","Data":"8d37a9a4fc34d282eeb14faf9759e36aab47b72b802f67f938482a0fe86152c9"} Jan 26 08:17:41 crc kubenswrapper[4806]: I0126 08:17:41.313659 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rjsr" event={"ID":"d8a53787-c16b-4a37-8090-f88af2cf74b8","Type":"ContainerStarted","Data":"cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5"} Jan 26 08:17:41 crc kubenswrapper[4806]: I0126 08:17:41.333042 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5rjsr" podStartSLOduration=1.8614498560000001 podStartE2EDuration="6.333026407s" podCreationTimestamp="2026-01-26 08:17:35 +0000 UTC" firstStartedPulling="2026-01-26 08:17:36.261094916 +0000 UTC m=+1435.525503012" lastFinishedPulling="2026-01-26 08:17:40.732671507 +0000 UTC m=+1439.997079563" observedRunningTime="2026-01-26 08:17:41.329758115 +0000 UTC m=+1440.594166171" watchObservedRunningTime="2026-01-26 08:17:41.333026407 +0000 UTC m=+1440.597434463" Jan 26 08:17:45 crc kubenswrapper[4806]: I0126 08:17:45.486135 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5rjsr" 
Jan 26 08:17:45 crc kubenswrapper[4806]: I0126 08:17:45.486817 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:45 crc kubenswrapper[4806]: I0126 08:17:45.806443 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:17:45 crc kubenswrapper[4806]: I0126 08:17:45.806499 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:17:46 crc kubenswrapper[4806]: I0126 08:17:46.542306 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5rjsr" podUID="d8a53787-c16b-4a37-8090-f88af2cf74b8" containerName="registry-server" probeResult="failure" output=< Jan 26 08:17:46 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 08:17:46 crc kubenswrapper[4806]: > Jan 26 08:17:55 crc kubenswrapper[4806]: I0126 08:17:55.539824 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:55 crc kubenswrapper[4806]: I0126 08:17:55.589249 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:55 crc kubenswrapper[4806]: I0126 08:17:55.794003 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5rjsr"] Jan 26 08:17:57 crc kubenswrapper[4806]: I0126 08:17:57.451289 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5rjsr" podUID="d8a53787-c16b-4a37-8090-f88af2cf74b8" containerName="registry-server" containerID="cri-o://cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5" gracePeriod=2 Jan 26 08:17:57 crc kubenswrapper[4806]: I0126 08:17:57.930498 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.099689 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a53787-c16b-4a37-8090-f88af2cf74b8-catalog-content\") pod \"d8a53787-c16b-4a37-8090-f88af2cf74b8\" (UID: \"d8a53787-c16b-4a37-8090-f88af2cf74b8\") " Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.099872 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcxsn\" (UniqueName: \"kubernetes.io/projected/d8a53787-c16b-4a37-8090-f88af2cf74b8-kube-api-access-bcxsn\") pod \"d8a53787-c16b-4a37-8090-f88af2cf74b8\" (UID: \"d8a53787-c16b-4a37-8090-f88af2cf74b8\") " Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.099922 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a53787-c16b-4a37-8090-f88af2cf74b8-utilities\") pod \"d8a53787-c16b-4a37-8090-f88af2cf74b8\" (UID: \"d8a53787-c16b-4a37-8090-f88af2cf74b8\") " Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.100836 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8a53787-c16b-4a37-8090-f88af2cf74b8-utilities" (OuterVolumeSpecName: "utilities") pod "d8a53787-c16b-4a37-8090-f88af2cf74b8" (UID: "d8a53787-c16b-4a37-8090-f88af2cf74b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.112873 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8a53787-c16b-4a37-8090-f88af2cf74b8-kube-api-access-bcxsn" (OuterVolumeSpecName: "kube-api-access-bcxsn") pod "d8a53787-c16b-4a37-8090-f88af2cf74b8" (UID: "d8a53787-c16b-4a37-8090-f88af2cf74b8"). InnerVolumeSpecName "kube-api-access-bcxsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.202032 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcxsn\" (UniqueName: \"kubernetes.io/projected/d8a53787-c16b-4a37-8090-f88af2cf74b8-kube-api-access-bcxsn\") on node \"crc\" DevicePath \"\"" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.202065 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8a53787-c16b-4a37-8090-f88af2cf74b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.223396 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8a53787-c16b-4a37-8090-f88af2cf74b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8a53787-c16b-4a37-8090-f88af2cf74b8" (UID: "d8a53787-c16b-4a37-8090-f88af2cf74b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.304364 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8a53787-c16b-4a37-8090-f88af2cf74b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.462786 4806 generic.go:334] "Generic (PLEG): container finished" podID="d8a53787-c16b-4a37-8090-f88af2cf74b8" containerID="cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5" exitCode=0 Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.462909 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5rjsr" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.462945 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rjsr" event={"ID":"d8a53787-c16b-4a37-8090-f88af2cf74b8","Type":"ContainerDied","Data":"cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5"} Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.463302 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rjsr" event={"ID":"d8a53787-c16b-4a37-8090-f88af2cf74b8","Type":"ContainerDied","Data":"f421bcad75396f5f68a773f335920f0c9fdb83e52e7f60d70116a2b96f2b91b4"} Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.463333 4806 scope.go:117] "RemoveContainer" containerID="cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.481960 4806 scope.go:117] "RemoveContainer" containerID="8d37a9a4fc34d282eeb14faf9759e36aab47b72b802f67f938482a0fe86152c9" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.508469 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5rjsr"] Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.516688 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5rjsr"] Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.534135 4806 scope.go:117] "RemoveContainer" containerID="a3b76f80cb778ac443a58b6804ba32cd06e1e7ea2e62cadcdf30351db2093b56" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.560792 4806 scope.go:117] "RemoveContainer" containerID="cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5" Jan 26 08:17:58 crc kubenswrapper[4806]: E0126 08:17:58.561380 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5\": container with ID starting with cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5 not found: ID does not exist" containerID="cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.561427 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5"} err="failed to get container status \"cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5\": rpc error: code = NotFound desc = could not find container \"cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5\": container with ID starting with cf414828bd88588a7197a5c3ee2e392c0b4b698b8f71b3c19ab12bd37d16bce5 not found: ID does not exist" Jan 26 08:17:58 crc 
kubenswrapper[4806]: I0126 08:17:58.561449 4806 scope.go:117] "RemoveContainer" containerID="8d37a9a4fc34d282eeb14faf9759e36aab47b72b802f67f938482a0fe86152c9" Jan 26 08:17:58 crc kubenswrapper[4806]: E0126 08:17:58.561757 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d37a9a4fc34d282eeb14faf9759e36aab47b72b802f67f938482a0fe86152c9\": container with ID starting with 8d37a9a4fc34d282eeb14faf9759e36aab47b72b802f67f938482a0fe86152c9 not found: ID does not exist" containerID="8d37a9a4fc34d282eeb14faf9759e36aab47b72b802f67f938482a0fe86152c9" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.561807 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d37a9a4fc34d282eeb14faf9759e36aab47b72b802f67f938482a0fe86152c9"} err="failed to get container status \"8d37a9a4fc34d282eeb14faf9759e36aab47b72b802f67f938482a0fe86152c9\": rpc error: code = NotFound desc = could not find container \"8d37a9a4fc34d282eeb14faf9759e36aab47b72b802f67f938482a0fe86152c9\": container with ID starting with 8d37a9a4fc34d282eeb14faf9759e36aab47b72b802f67f938482a0fe86152c9 not found: ID does not exist" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.561826 4806 scope.go:117] "RemoveContainer" containerID="a3b76f80cb778ac443a58b6804ba32cd06e1e7ea2e62cadcdf30351db2093b56" Jan 26 08:17:58 crc kubenswrapper[4806]: E0126 08:17:58.562191 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b76f80cb778ac443a58b6804ba32cd06e1e7ea2e62cadcdf30351db2093b56\": container with ID starting with a3b76f80cb778ac443a58b6804ba32cd06e1e7ea2e62cadcdf30351db2093b56 not found: ID does not exist" containerID="a3b76f80cb778ac443a58b6804ba32cd06e1e7ea2e62cadcdf30351db2093b56" Jan 26 08:17:58 crc kubenswrapper[4806]: I0126 08:17:58.562231 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b76f80cb778ac443a58b6804ba32cd06e1e7ea2e62cadcdf30351db2093b56"} err="failed to get container status \"a3b76f80cb778ac443a58b6804ba32cd06e1e7ea2e62cadcdf30351db2093b56\": rpc error: code = NotFound desc = could not find container \"a3b76f80cb778ac443a58b6804ba32cd06e1e7ea2e62cadcdf30351db2093b56\": container with ID starting with a3b76f80cb778ac443a58b6804ba32cd06e1e7ea2e62cadcdf30351db2093b56 not found: ID does not exist" Jan 26 08:17:59 crc kubenswrapper[4806]: I0126 08:17:59.051857 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8a53787-c16b-4a37-8090-f88af2cf74b8" path="/var/lib/kubelet/pods/d8a53787-c16b-4a37-8090-f88af2cf74b8/volumes" Jan 26 08:18:15 crc kubenswrapper[4806]: I0126 08:18:15.806198 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:18:15 crc kubenswrapper[4806]: I0126 08:18:15.806742 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:18:15 crc kubenswrapper[4806]: I0126 08:18:15.806800 4806 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:18:15 crc kubenswrapper[4806]: I0126 08:18:15.807648 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09669619f64d4d35cd31b87d98b04e88f92b9a54a34f625c50be4875e6fefe66"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:18:15 crc kubenswrapper[4806]: I0126 08:18:15.807714 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://09669619f64d4d35cd31b87d98b04e88f92b9a54a34f625c50be4875e6fefe66" gracePeriod=600 Jan 26 08:18:16 crc kubenswrapper[4806]: I0126 08:18:16.631636 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="09669619f64d4d35cd31b87d98b04e88f92b9a54a34f625c50be4875e6fefe66" exitCode=0 Jan 26 08:18:16 crc kubenswrapper[4806]: I0126 08:18:16.631739 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"09669619f64d4d35cd31b87d98b04e88f92b9a54a34f625c50be4875e6fefe66"} Jan 26 08:18:16 crc kubenswrapper[4806]: I0126 08:18:16.631992 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349"} Jan 26 08:18:16 crc kubenswrapper[4806]: I0126 08:18:16.632013 4806 scope.go:117] "RemoveContainer" containerID="8880d10e53faf854bc25456c263d76882c8161d6eb264ea6dd36a69766a56246" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.110483 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gq8rw"] Jan 26 08:18:19 crc kubenswrapper[4806]: E0126 08:18:19.111463 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a53787-c16b-4a37-8090-f88af2cf74b8" containerName="extract-content" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.111479 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a53787-c16b-4a37-8090-f88af2cf74b8" containerName="extract-content" Jan 26 08:18:19 crc kubenswrapper[4806]: E0126 08:18:19.111510 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a53787-c16b-4a37-8090-f88af2cf74b8" containerName="registry-server" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.111523 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a53787-c16b-4a37-8090-f88af2cf74b8" containerName="registry-server" Jan 26 08:18:19 crc kubenswrapper[4806]: E0126 08:18:19.111561 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8a53787-c16b-4a37-8090-f88af2cf74b8" containerName="extract-utilities" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.111572 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8a53787-c16b-4a37-8090-f88af2cf74b8" containerName="extract-utilities" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.111806 4806 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d8a53787-c16b-4a37-8090-f88af2cf74b8" containerName="registry-server" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.113445 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.136936 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gq8rw"] Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.197718 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2972aa43-2198-44bf-9975-beb252cffb1c-utilities\") pod \"certified-operators-gq8rw\" (UID: \"2972aa43-2198-44bf-9975-beb252cffb1c\") " pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.198082 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xlvh\" (UniqueName: \"kubernetes.io/projected/2972aa43-2198-44bf-9975-beb252cffb1c-kube-api-access-2xlvh\") pod \"certified-operators-gq8rw\" (UID: \"2972aa43-2198-44bf-9975-beb252cffb1c\") " pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.198126 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2972aa43-2198-44bf-9975-beb252cffb1c-catalog-content\") pod \"certified-operators-gq8rw\" (UID: \"2972aa43-2198-44bf-9975-beb252cffb1c\") " pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.300199 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xlvh\" (UniqueName: \"kubernetes.io/projected/2972aa43-2198-44bf-9975-beb252cffb1c-kube-api-access-2xlvh\") pod \"certified-operators-gq8rw\" (UID: \"2972aa43-2198-44bf-9975-beb252cffb1c\") " pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.300731 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2972aa43-2198-44bf-9975-beb252cffb1c-catalog-content\") pod \"certified-operators-gq8rw\" (UID: \"2972aa43-2198-44bf-9975-beb252cffb1c\") " pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.301264 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2972aa43-2198-44bf-9975-beb252cffb1c-utilities\") pod \"certified-operators-gq8rw\" (UID: \"2972aa43-2198-44bf-9975-beb252cffb1c\") " pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.301170 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2972aa43-2198-44bf-9975-beb252cffb1c-catalog-content\") pod \"certified-operators-gq8rw\" (UID: \"2972aa43-2198-44bf-9975-beb252cffb1c\") " pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.301598 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2972aa43-2198-44bf-9975-beb252cffb1c-utilities\") pod \"certified-operators-gq8rw\" (UID: 
\"2972aa43-2198-44bf-9975-beb252cffb1c\") " pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.317985 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xlvh\" (UniqueName: \"kubernetes.io/projected/2972aa43-2198-44bf-9975-beb252cffb1c-kube-api-access-2xlvh\") pod \"certified-operators-gq8rw\" (UID: \"2972aa43-2198-44bf-9975-beb252cffb1c\") " pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.452899 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:19 crc kubenswrapper[4806]: I0126 08:18:19.928024 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gq8rw"] Jan 26 08:18:19 crc kubenswrapper[4806]: W0126 08:18:19.937719 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2972aa43_2198_44bf_9975_beb252cffb1c.slice/crio-0802695d34da7fa302a510d440233f3030698de6a1d4b6b916ea0b1ec711bfd8 WatchSource:0}: Error finding container 0802695d34da7fa302a510d440233f3030698de6a1d4b6b916ea0b1ec711bfd8: Status 404 returned error can't find the container with id 0802695d34da7fa302a510d440233f3030698de6a1d4b6b916ea0b1ec711bfd8 Jan 26 08:18:20 crc kubenswrapper[4806]: I0126 08:18:20.675825 4806 generic.go:334] "Generic (PLEG): container finished" podID="2972aa43-2198-44bf-9975-beb252cffb1c" containerID="d3ca3ba37c314a354a9014d6550b8e1904bb8b9cce46071a7e0a67b9f90f21f5" exitCode=0 Jan 26 08:18:20 crc kubenswrapper[4806]: I0126 08:18:20.675889 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq8rw" event={"ID":"2972aa43-2198-44bf-9975-beb252cffb1c","Type":"ContainerDied","Data":"d3ca3ba37c314a354a9014d6550b8e1904bb8b9cce46071a7e0a67b9f90f21f5"} Jan 26 08:18:20 crc kubenswrapper[4806]: I0126 08:18:20.676141 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq8rw" event={"ID":"2972aa43-2198-44bf-9975-beb252cffb1c","Type":"ContainerStarted","Data":"0802695d34da7fa302a510d440233f3030698de6a1d4b6b916ea0b1ec711bfd8"} Jan 26 08:18:21 crc kubenswrapper[4806]: I0126 08:18:21.686924 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq8rw" event={"ID":"2972aa43-2198-44bf-9975-beb252cffb1c","Type":"ContainerStarted","Data":"7f2b633662a2f6bb9089c484f76b29f7faaf30d9f43d85f6cbcd54daa57e8513"} Jan 26 08:18:22 crc kubenswrapper[4806]: I0126 08:18:22.701349 4806 generic.go:334] "Generic (PLEG): container finished" podID="2972aa43-2198-44bf-9975-beb252cffb1c" containerID="7f2b633662a2f6bb9089c484f76b29f7faaf30d9f43d85f6cbcd54daa57e8513" exitCode=0 Jan 26 08:18:22 crc kubenswrapper[4806]: I0126 08:18:22.701391 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq8rw" event={"ID":"2972aa43-2198-44bf-9975-beb252cffb1c","Type":"ContainerDied","Data":"7f2b633662a2f6bb9089c484f76b29f7faaf30d9f43d85f6cbcd54daa57e8513"} Jan 26 08:18:23 crc kubenswrapper[4806]: I0126 08:18:23.713043 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq8rw" event={"ID":"2972aa43-2198-44bf-9975-beb252cffb1c","Type":"ContainerStarted","Data":"05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0"} Jan 26 08:18:23 crc 
kubenswrapper[4806]: I0126 08:18:23.734635 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gq8rw" podStartSLOduration=2.291585828 podStartE2EDuration="4.734616558s" podCreationTimestamp="2026-01-26 08:18:19 +0000 UTC" firstStartedPulling="2026-01-26 08:18:20.677383631 +0000 UTC m=+1479.941791687" lastFinishedPulling="2026-01-26 08:18:23.120414361 +0000 UTC m=+1482.384822417" observedRunningTime="2026-01-26 08:18:23.727598852 +0000 UTC m=+1482.992006928" watchObservedRunningTime="2026-01-26 08:18:23.734616558 +0000 UTC m=+1482.999024614" Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.479906 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7q4r5"] Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.482177 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.506239 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q4r5"] Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.534702 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-utilities\") pod \"redhat-marketplace-7q4r5\" (UID: \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\") " pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.534929 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssc2t\" (UniqueName: \"kubernetes.io/projected/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-kube-api-access-ssc2t\") pod \"redhat-marketplace-7q4r5\" (UID: \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\") " pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.534993 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-catalog-content\") pod \"redhat-marketplace-7q4r5\" (UID: \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\") " pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.637424 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-catalog-content\") pod \"redhat-marketplace-7q4r5\" (UID: \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\") " pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.637500 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-utilities\") pod \"redhat-marketplace-7q4r5\" (UID: \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\") " pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.637680 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssc2t\" (UniqueName: \"kubernetes.io/projected/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-kube-api-access-ssc2t\") pod \"redhat-marketplace-7q4r5\" (UID: \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\") " 
pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.638205 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-catalog-content\") pod \"redhat-marketplace-7q4r5\" (UID: \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\") " pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.638414 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-utilities\") pod \"redhat-marketplace-7q4r5\" (UID: \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\") " pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.655629 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssc2t\" (UniqueName: \"kubernetes.io/projected/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-kube-api-access-ssc2t\") pod \"redhat-marketplace-7q4r5\" (UID: \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\") " pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:26 crc kubenswrapper[4806]: I0126 08:18:26.803972 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:27 crc kubenswrapper[4806]: I0126 08:18:27.474032 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q4r5"] Jan 26 08:18:27 crc kubenswrapper[4806]: I0126 08:18:27.753409 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q4r5" event={"ID":"124f49f5-a51a-48b0-98cf-f4d85e7e57c8","Type":"ContainerStarted","Data":"2874ddc21c534eac3668e2b74c8af4c89b7a4905b71e080b0337c20a7acb3d20"} Jan 26 08:18:28 crc kubenswrapper[4806]: I0126 08:18:28.763746 4806 generic.go:334] "Generic (PLEG): container finished" podID="124f49f5-a51a-48b0-98cf-f4d85e7e57c8" containerID="0262f7ede546c2fda9108c75cbb18faa31e946590aa20c53ea26bf2fd6ddd916" exitCode=0 Jan 26 08:18:28 crc kubenswrapper[4806]: I0126 08:18:28.763833 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q4r5" event={"ID":"124f49f5-a51a-48b0-98cf-f4d85e7e57c8","Type":"ContainerDied","Data":"0262f7ede546c2fda9108c75cbb18faa31e946590aa20c53ea26bf2fd6ddd916"} Jan 26 08:18:29 crc kubenswrapper[4806]: I0126 08:18:29.454152 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:29 crc kubenswrapper[4806]: I0126 08:18:29.454196 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:29 crc kubenswrapper[4806]: I0126 08:18:29.506497 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:29 crc kubenswrapper[4806]: I0126 08:18:29.819082 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:30 crc kubenswrapper[4806]: I0126 08:18:30.782331 4806 generic.go:334] "Generic (PLEG): container finished" podID="124f49f5-a51a-48b0-98cf-f4d85e7e57c8" containerID="11ed914452ca3a33307c08355ba2238d08a7d4649e96897717ab2cfc363b7c3e" exitCode=0 Jan 26 08:18:30 crc kubenswrapper[4806]: I0126 
08:18:30.782438 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q4r5" event={"ID":"124f49f5-a51a-48b0-98cf-f4d85e7e57c8","Type":"ContainerDied","Data":"11ed914452ca3a33307c08355ba2238d08a7d4649e96897717ab2cfc363b7c3e"} Jan 26 08:18:31 crc kubenswrapper[4806]: I0126 08:18:31.792959 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q4r5" event={"ID":"124f49f5-a51a-48b0-98cf-f4d85e7e57c8","Type":"ContainerStarted","Data":"6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d"} Jan 26 08:18:31 crc kubenswrapper[4806]: I0126 08:18:31.819350 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7q4r5" podStartSLOduration=3.369156883 podStartE2EDuration="5.819333094s" podCreationTimestamp="2026-01-26 08:18:26 +0000 UTC" firstStartedPulling="2026-01-26 08:18:28.765767648 +0000 UTC m=+1488.030175704" lastFinishedPulling="2026-01-26 08:18:31.215943869 +0000 UTC m=+1490.480351915" observedRunningTime="2026-01-26 08:18:31.812648957 +0000 UTC m=+1491.077057023" watchObservedRunningTime="2026-01-26 08:18:31.819333094 +0000 UTC m=+1491.083741150" Jan 26 08:18:31 crc kubenswrapper[4806]: I0126 08:18:31.874491 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gq8rw"] Jan 26 08:18:31 crc kubenswrapper[4806]: I0126 08:18:31.876217 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gq8rw" podUID="2972aa43-2198-44bf-9975-beb252cffb1c" containerName="registry-server" containerID="cri-o://05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0" gracePeriod=2 Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.396471 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.449718 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2972aa43-2198-44bf-9975-beb252cffb1c-utilities\") pod \"2972aa43-2198-44bf-9975-beb252cffb1c\" (UID: \"2972aa43-2198-44bf-9975-beb252cffb1c\") " Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.449829 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xlvh\" (UniqueName: \"kubernetes.io/projected/2972aa43-2198-44bf-9975-beb252cffb1c-kube-api-access-2xlvh\") pod \"2972aa43-2198-44bf-9975-beb252cffb1c\" (UID: \"2972aa43-2198-44bf-9975-beb252cffb1c\") " Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.449863 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2972aa43-2198-44bf-9975-beb252cffb1c-catalog-content\") pod \"2972aa43-2198-44bf-9975-beb252cffb1c\" (UID: \"2972aa43-2198-44bf-9975-beb252cffb1c\") " Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.450923 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2972aa43-2198-44bf-9975-beb252cffb1c-utilities" (OuterVolumeSpecName: "utilities") pod "2972aa43-2198-44bf-9975-beb252cffb1c" (UID: "2972aa43-2198-44bf-9975-beb252cffb1c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.465811 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2972aa43-2198-44bf-9975-beb252cffb1c-kube-api-access-2xlvh" (OuterVolumeSpecName: "kube-api-access-2xlvh") pod "2972aa43-2198-44bf-9975-beb252cffb1c" (UID: "2972aa43-2198-44bf-9975-beb252cffb1c"). InnerVolumeSpecName "kube-api-access-2xlvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.506916 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2972aa43-2198-44bf-9975-beb252cffb1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2972aa43-2198-44bf-9975-beb252cffb1c" (UID: "2972aa43-2198-44bf-9975-beb252cffb1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.552653 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xlvh\" (UniqueName: \"kubernetes.io/projected/2972aa43-2198-44bf-9975-beb252cffb1c-kube-api-access-2xlvh\") on node \"crc\" DevicePath \"\"" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.552680 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2972aa43-2198-44bf-9975-beb252cffb1c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.552691 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2972aa43-2198-44bf-9975-beb252cffb1c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.802495 4806 generic.go:334] "Generic (PLEG): container finished" podID="2972aa43-2198-44bf-9975-beb252cffb1c" containerID="05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0" exitCode=0 Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.802569 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gq8rw" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.802586 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq8rw" event={"ID":"2972aa43-2198-44bf-9975-beb252cffb1c","Type":"ContainerDied","Data":"05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0"} Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.802721 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq8rw" event={"ID":"2972aa43-2198-44bf-9975-beb252cffb1c","Type":"ContainerDied","Data":"0802695d34da7fa302a510d440233f3030698de6a1d4b6b916ea0b1ec711bfd8"} Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.802786 4806 scope.go:117] "RemoveContainer" containerID="05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.838603 4806 scope.go:117] "RemoveContainer" containerID="7f2b633662a2f6bb9089c484f76b29f7faaf30d9f43d85f6cbcd54daa57e8513" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.847893 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gq8rw"] Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.857312 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gq8rw"] Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.862608 4806 scope.go:117] "RemoveContainer" containerID="d3ca3ba37c314a354a9014d6550b8e1904bb8b9cce46071a7e0a67b9f90f21f5" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.907046 4806 scope.go:117] "RemoveContainer" containerID="05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0" Jan 26 08:18:32 crc kubenswrapper[4806]: E0126 08:18:32.907769 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0\": container with ID starting with 05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0 not found: ID does not exist" containerID="05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.907804 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0"} err="failed to get container status \"05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0\": rpc error: code = NotFound desc = could not find container \"05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0\": container with ID starting with 05687417b51633ae8127da3aab2662299f1394a7295caf427f8441fc61baf4d0 not found: ID does not exist" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.907829 4806 scope.go:117] "RemoveContainer" containerID="7f2b633662a2f6bb9089c484f76b29f7faaf30d9f43d85f6cbcd54daa57e8513" Jan 26 08:18:32 crc kubenswrapper[4806]: E0126 08:18:32.908194 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f2b633662a2f6bb9089c484f76b29f7faaf30d9f43d85f6cbcd54daa57e8513\": container with ID starting with 7f2b633662a2f6bb9089c484f76b29f7faaf30d9f43d85f6cbcd54daa57e8513 not found: ID does not exist" containerID="7f2b633662a2f6bb9089c484f76b29f7faaf30d9f43d85f6cbcd54daa57e8513" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.908220 4806 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f2b633662a2f6bb9089c484f76b29f7faaf30d9f43d85f6cbcd54daa57e8513"} err="failed to get container status \"7f2b633662a2f6bb9089c484f76b29f7faaf30d9f43d85f6cbcd54daa57e8513\": rpc error: code = NotFound desc = could not find container \"7f2b633662a2f6bb9089c484f76b29f7faaf30d9f43d85f6cbcd54daa57e8513\": container with ID starting with 7f2b633662a2f6bb9089c484f76b29f7faaf30d9f43d85f6cbcd54daa57e8513 not found: ID does not exist" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.908241 4806 scope.go:117] "RemoveContainer" containerID="d3ca3ba37c314a354a9014d6550b8e1904bb8b9cce46071a7e0a67b9f90f21f5" Jan 26 08:18:32 crc kubenswrapper[4806]: E0126 08:18:32.908498 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ca3ba37c314a354a9014d6550b8e1904bb8b9cce46071a7e0a67b9f90f21f5\": container with ID starting with d3ca3ba37c314a354a9014d6550b8e1904bb8b9cce46071a7e0a67b9f90f21f5 not found: ID does not exist" containerID="d3ca3ba37c314a354a9014d6550b8e1904bb8b9cce46071a7e0a67b9f90f21f5" Jan 26 08:18:32 crc kubenswrapper[4806]: I0126 08:18:32.908541 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ca3ba37c314a354a9014d6550b8e1904bb8b9cce46071a7e0a67b9f90f21f5"} err="failed to get container status \"d3ca3ba37c314a354a9014d6550b8e1904bb8b9cce46071a7e0a67b9f90f21f5\": rpc error: code = NotFound desc = could not find container \"d3ca3ba37c314a354a9014d6550b8e1904bb8b9cce46071a7e0a67b9f90f21f5\": container with ID starting with d3ca3ba37c314a354a9014d6550b8e1904bb8b9cce46071a7e0a67b9f90f21f5 not found: ID does not exist" Jan 26 08:18:33 crc kubenswrapper[4806]: I0126 08:18:33.052904 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2972aa43-2198-44bf-9975-beb252cffb1c" path="/var/lib/kubelet/pods/2972aa43-2198-44bf-9975-beb252cffb1c/volumes" Jan 26 08:18:36 crc kubenswrapper[4806]: I0126 08:18:36.804903 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:36 crc kubenswrapper[4806]: I0126 08:18:36.805252 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:36 crc kubenswrapper[4806]: I0126 08:18:36.857982 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:36 crc kubenswrapper[4806]: I0126 08:18:36.916150 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:37 crc kubenswrapper[4806]: I0126 08:18:37.894051 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q4r5"] Jan 26 08:18:38 crc kubenswrapper[4806]: I0126 08:18:38.856200 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7q4r5" podUID="124f49f5-a51a-48b0-98cf-f4d85e7e57c8" containerName="registry-server" containerID="cri-o://6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d" gracePeriod=2 Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.331933 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.393487 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssc2t\" (UniqueName: \"kubernetes.io/projected/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-kube-api-access-ssc2t\") pod \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\" (UID: \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\") " Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.393571 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-utilities\") pod \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\" (UID: \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\") " Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.393643 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-catalog-content\") pod \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\" (UID: \"124f49f5-a51a-48b0-98cf-f4d85e7e57c8\") " Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.394429 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-utilities" (OuterVolumeSpecName: "utilities") pod "124f49f5-a51a-48b0-98cf-f4d85e7e57c8" (UID: "124f49f5-a51a-48b0-98cf-f4d85e7e57c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.399737 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-kube-api-access-ssc2t" (OuterVolumeSpecName: "kube-api-access-ssc2t") pod "124f49f5-a51a-48b0-98cf-f4d85e7e57c8" (UID: "124f49f5-a51a-48b0-98cf-f4d85e7e57c8"). InnerVolumeSpecName "kube-api-access-ssc2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.437867 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "124f49f5-a51a-48b0-98cf-f4d85e7e57c8" (UID: "124f49f5-a51a-48b0-98cf-f4d85e7e57c8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.496788 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.496820 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.496832 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssc2t\" (UniqueName: \"kubernetes.io/projected/124f49f5-a51a-48b0-98cf-f4d85e7e57c8-kube-api-access-ssc2t\") on node \"crc\" DevicePath \"\"" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.867166 4806 generic.go:334] "Generic (PLEG): container finished" podID="124f49f5-a51a-48b0-98cf-f4d85e7e57c8" containerID="6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d" exitCode=0 Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.867214 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q4r5" event={"ID":"124f49f5-a51a-48b0-98cf-f4d85e7e57c8","Type":"ContainerDied","Data":"6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d"} Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.867247 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7q4r5" event={"ID":"124f49f5-a51a-48b0-98cf-f4d85e7e57c8","Type":"ContainerDied","Data":"2874ddc21c534eac3668e2b74c8af4c89b7a4905b71e080b0337c20a7acb3d20"} Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.867267 4806 scope.go:117] "RemoveContainer" containerID="6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.867418 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7q4r5" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.887741 4806 scope.go:117] "RemoveContainer" containerID="11ed914452ca3a33307c08355ba2238d08a7d4649e96897717ab2cfc363b7c3e" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.904089 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q4r5"] Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.915050 4806 scope.go:117] "RemoveContainer" containerID="0262f7ede546c2fda9108c75cbb18faa31e946590aa20c53ea26bf2fd6ddd916" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.920992 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7q4r5"] Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.978650 4806 scope.go:117] "RemoveContainer" containerID="6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d" Jan 26 08:18:39 crc kubenswrapper[4806]: E0126 08:18:39.979182 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d\": container with ID starting with 6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d not found: ID does not exist" containerID="6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.979235 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d"} err="failed to get container status \"6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d\": rpc error: code = NotFound desc = could not find container \"6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d\": container with ID starting with 6a20f56cbd4bad4b2cc32a8c5bc8b8f8332a20f2270fe933b8765369a42eca5d not found: ID does not exist" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.979268 4806 scope.go:117] "RemoveContainer" containerID="11ed914452ca3a33307c08355ba2238d08a7d4649e96897717ab2cfc363b7c3e" Jan 26 08:18:39 crc kubenswrapper[4806]: E0126 08:18:39.979638 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11ed914452ca3a33307c08355ba2238d08a7d4649e96897717ab2cfc363b7c3e\": container with ID starting with 11ed914452ca3a33307c08355ba2238d08a7d4649e96897717ab2cfc363b7c3e not found: ID does not exist" containerID="11ed914452ca3a33307c08355ba2238d08a7d4649e96897717ab2cfc363b7c3e" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.979672 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11ed914452ca3a33307c08355ba2238d08a7d4649e96897717ab2cfc363b7c3e"} err="failed to get container status \"11ed914452ca3a33307c08355ba2238d08a7d4649e96897717ab2cfc363b7c3e\": rpc error: code = NotFound desc = could not find container \"11ed914452ca3a33307c08355ba2238d08a7d4649e96897717ab2cfc363b7c3e\": container with ID starting with 11ed914452ca3a33307c08355ba2238d08a7d4649e96897717ab2cfc363b7c3e not found: ID does not exist" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.979691 4806 scope.go:117] "RemoveContainer" containerID="0262f7ede546c2fda9108c75cbb18faa31e946590aa20c53ea26bf2fd6ddd916" Jan 26 08:18:39 crc kubenswrapper[4806]: E0126 08:18:39.979937 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0262f7ede546c2fda9108c75cbb18faa31e946590aa20c53ea26bf2fd6ddd916\": container with ID starting with 0262f7ede546c2fda9108c75cbb18faa31e946590aa20c53ea26bf2fd6ddd916 not found: ID does not exist" containerID="0262f7ede546c2fda9108c75cbb18faa31e946590aa20c53ea26bf2fd6ddd916" Jan 26 08:18:39 crc kubenswrapper[4806]: I0126 08:18:39.979970 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0262f7ede546c2fda9108c75cbb18faa31e946590aa20c53ea26bf2fd6ddd916"} err="failed to get container status \"0262f7ede546c2fda9108c75cbb18faa31e946590aa20c53ea26bf2fd6ddd916\": rpc error: code = NotFound desc = could not find container \"0262f7ede546c2fda9108c75cbb18faa31e946590aa20c53ea26bf2fd6ddd916\": container with ID starting with 0262f7ede546c2fda9108c75cbb18faa31e946590aa20c53ea26bf2fd6ddd916 not found: ID does not exist" Jan 26 08:18:41 crc kubenswrapper[4806]: I0126 08:18:41.056995 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="124f49f5-a51a-48b0-98cf-f4d85e7e57c8" path="/var/lib/kubelet/pods/124f49f5-a51a-48b0-98cf-f4d85e7e57c8/volumes" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.443059 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4r2qm"] Jan 26 08:19:10 crc kubenswrapper[4806]: E0126 08:19:10.443857 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124f49f5-a51a-48b0-98cf-f4d85e7e57c8" containerName="registry-server" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.443870 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="124f49f5-a51a-48b0-98cf-f4d85e7e57c8" containerName="registry-server" Jan 26 08:19:10 crc kubenswrapper[4806]: E0126 08:19:10.443886 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2972aa43-2198-44bf-9975-beb252cffb1c" containerName="extract-utilities" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.443892 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2972aa43-2198-44bf-9975-beb252cffb1c" containerName="extract-utilities" Jan 26 08:19:10 crc kubenswrapper[4806]: E0126 08:19:10.443910 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124f49f5-a51a-48b0-98cf-f4d85e7e57c8" containerName="extract-content" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.443916 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="124f49f5-a51a-48b0-98cf-f4d85e7e57c8" containerName="extract-content" Jan 26 08:19:10 crc kubenswrapper[4806]: E0126 08:19:10.443925 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2972aa43-2198-44bf-9975-beb252cffb1c" containerName="extract-content" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.443930 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2972aa43-2198-44bf-9975-beb252cffb1c" containerName="extract-content" Jan 26 08:19:10 crc kubenswrapper[4806]: E0126 08:19:10.443941 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="124f49f5-a51a-48b0-98cf-f4d85e7e57c8" containerName="extract-utilities" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.443947 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="124f49f5-a51a-48b0-98cf-f4d85e7e57c8" containerName="extract-utilities" Jan 26 08:19:10 crc kubenswrapper[4806]: E0126 08:19:10.443962 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2972aa43-2198-44bf-9975-beb252cffb1c" 
containerName="registry-server" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.443967 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2972aa43-2198-44bf-9975-beb252cffb1c" containerName="registry-server" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.444128 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2972aa43-2198-44bf-9975-beb252cffb1c" containerName="registry-server" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.444153 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="124f49f5-a51a-48b0-98cf-f4d85e7e57c8" containerName="registry-server" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.445420 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.471248 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4r2qm"] Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.591767 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db26beed-354c-4f85-bacd-6c069431fa0d-utilities\") pod \"community-operators-4r2qm\" (UID: \"db26beed-354c-4f85-bacd-6c069431fa0d\") " pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.591977 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7d8g\" (UniqueName: \"kubernetes.io/projected/db26beed-354c-4f85-bacd-6c069431fa0d-kube-api-access-z7d8g\") pod \"community-operators-4r2qm\" (UID: \"db26beed-354c-4f85-bacd-6c069431fa0d\") " pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.592004 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db26beed-354c-4f85-bacd-6c069431fa0d-catalog-content\") pod \"community-operators-4r2qm\" (UID: \"db26beed-354c-4f85-bacd-6c069431fa0d\") " pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.693499 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db26beed-354c-4f85-bacd-6c069431fa0d-utilities\") pod \"community-operators-4r2qm\" (UID: \"db26beed-354c-4f85-bacd-6c069431fa0d\") " pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.693689 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7d8g\" (UniqueName: \"kubernetes.io/projected/db26beed-354c-4f85-bacd-6c069431fa0d-kube-api-access-z7d8g\") pod \"community-operators-4r2qm\" (UID: \"db26beed-354c-4f85-bacd-6c069431fa0d\") " pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.693717 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db26beed-354c-4f85-bacd-6c069431fa0d-catalog-content\") pod \"community-operators-4r2qm\" (UID: \"db26beed-354c-4f85-bacd-6c069431fa0d\") " pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.694114 4806 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db26beed-354c-4f85-bacd-6c069431fa0d-utilities\") pod \"community-operators-4r2qm\" (UID: \"db26beed-354c-4f85-bacd-6c069431fa0d\") " pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.694171 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db26beed-354c-4f85-bacd-6c069431fa0d-catalog-content\") pod \"community-operators-4r2qm\" (UID: \"db26beed-354c-4f85-bacd-6c069431fa0d\") " pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.719718 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7d8g\" (UniqueName: \"kubernetes.io/projected/db26beed-354c-4f85-bacd-6c069431fa0d-kube-api-access-z7d8g\") pod \"community-operators-4r2qm\" (UID: \"db26beed-354c-4f85-bacd-6c069431fa0d\") " pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:10 crc kubenswrapper[4806]: I0126 08:19:10.769114 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:11 crc kubenswrapper[4806]: I0126 08:19:11.255888 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4r2qm"] Jan 26 08:19:12 crc kubenswrapper[4806]: I0126 08:19:12.159272 4806 generic.go:334] "Generic (PLEG): container finished" podID="db26beed-354c-4f85-bacd-6c069431fa0d" containerID="9313ce5730bbfbc94876abebeef59a286d7dedb62bb081c69ccb0fb8d0f40aae" exitCode=0 Jan 26 08:19:12 crc kubenswrapper[4806]: I0126 08:19:12.159355 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r2qm" event={"ID":"db26beed-354c-4f85-bacd-6c069431fa0d","Type":"ContainerDied","Data":"9313ce5730bbfbc94876abebeef59a286d7dedb62bb081c69ccb0fb8d0f40aae"} Jan 26 08:19:12 crc kubenswrapper[4806]: I0126 08:19:12.159760 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r2qm" event={"ID":"db26beed-354c-4f85-bacd-6c069431fa0d","Type":"ContainerStarted","Data":"06d85710edabac3ccd4592ddcc19a84c011333f5d81b03262fdf88e9055b22d7"} Jan 26 08:19:12 crc kubenswrapper[4806]: I0126 08:19:12.161198 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 08:19:13 crc kubenswrapper[4806]: I0126 08:19:13.168611 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r2qm" event={"ID":"db26beed-354c-4f85-bacd-6c069431fa0d","Type":"ContainerStarted","Data":"2379791753c90d5d1f1550bc7582cd6fae348069bfc24840d68ae18bb7bf106f"} Jan 26 08:19:14 crc kubenswrapper[4806]: I0126 08:19:14.179844 4806 generic.go:334] "Generic (PLEG): container finished" podID="db26beed-354c-4f85-bacd-6c069431fa0d" containerID="2379791753c90d5d1f1550bc7582cd6fae348069bfc24840d68ae18bb7bf106f" exitCode=0 Jan 26 08:19:14 crc kubenswrapper[4806]: I0126 08:19:14.179885 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r2qm" event={"ID":"db26beed-354c-4f85-bacd-6c069431fa0d","Type":"ContainerDied","Data":"2379791753c90d5d1f1550bc7582cd6fae348069bfc24840d68ae18bb7bf106f"} Jan 26 08:19:15 crc kubenswrapper[4806]: I0126 08:19:15.208092 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-4r2qm" event={"ID":"db26beed-354c-4f85-bacd-6c069431fa0d","Type":"ContainerStarted","Data":"101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc"} Jan 26 08:19:15 crc kubenswrapper[4806]: I0126 08:19:15.228196 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4r2qm" podStartSLOduration=2.827834356 podStartE2EDuration="5.228179174s" podCreationTimestamp="2026-01-26 08:19:10 +0000 UTC" firstStartedPulling="2026-01-26 08:19:12.160965566 +0000 UTC m=+1531.425373622" lastFinishedPulling="2026-01-26 08:19:14.561310384 +0000 UTC m=+1533.825718440" observedRunningTime="2026-01-26 08:19:15.221679142 +0000 UTC m=+1534.486087198" watchObservedRunningTime="2026-01-26 08:19:15.228179174 +0000 UTC m=+1534.492587230" Jan 26 08:19:20 crc kubenswrapper[4806]: I0126 08:19:20.770203 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:20 crc kubenswrapper[4806]: I0126 08:19:20.770806 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:20 crc kubenswrapper[4806]: I0126 08:19:20.817278 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:21 crc kubenswrapper[4806]: I0126 08:19:21.311002 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:21 crc kubenswrapper[4806]: I0126 08:19:21.366751 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4r2qm"] Jan 26 08:19:23 crc kubenswrapper[4806]: I0126 08:19:23.281725 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4r2qm" podUID="db26beed-354c-4f85-bacd-6c069431fa0d" containerName="registry-server" containerID="cri-o://101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc" gracePeriod=2 Jan 26 08:19:23 crc kubenswrapper[4806]: I0126 08:19:23.778542 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:23 crc kubenswrapper[4806]: I0126 08:19:23.968912 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db26beed-354c-4f85-bacd-6c069431fa0d-utilities\") pod \"db26beed-354c-4f85-bacd-6c069431fa0d\" (UID: \"db26beed-354c-4f85-bacd-6c069431fa0d\") " Jan 26 08:19:23 crc kubenswrapper[4806]: I0126 08:19:23.969438 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db26beed-354c-4f85-bacd-6c069431fa0d-catalog-content\") pod \"db26beed-354c-4f85-bacd-6c069431fa0d\" (UID: \"db26beed-354c-4f85-bacd-6c069431fa0d\") " Jan 26 08:19:23 crc kubenswrapper[4806]: I0126 08:19:23.969469 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7d8g\" (UniqueName: \"kubernetes.io/projected/db26beed-354c-4f85-bacd-6c069431fa0d-kube-api-access-z7d8g\") pod \"db26beed-354c-4f85-bacd-6c069431fa0d\" (UID: \"db26beed-354c-4f85-bacd-6c069431fa0d\") " Jan 26 08:19:23 crc kubenswrapper[4806]: I0126 08:19:23.969544 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db26beed-354c-4f85-bacd-6c069431fa0d-utilities" (OuterVolumeSpecName: "utilities") pod "db26beed-354c-4f85-bacd-6c069431fa0d" (UID: "db26beed-354c-4f85-bacd-6c069431fa0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:19:23 crc kubenswrapper[4806]: I0126 08:19:23.969946 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db26beed-354c-4f85-bacd-6c069431fa0d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:19:23 crc kubenswrapper[4806]: I0126 08:19:23.976591 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db26beed-354c-4f85-bacd-6c069431fa0d-kube-api-access-z7d8g" (OuterVolumeSpecName: "kube-api-access-z7d8g") pod "db26beed-354c-4f85-bacd-6c069431fa0d" (UID: "db26beed-354c-4f85-bacd-6c069431fa0d"). InnerVolumeSpecName "kube-api-access-z7d8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.019479 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db26beed-354c-4f85-bacd-6c069431fa0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db26beed-354c-4f85-bacd-6c069431fa0d" (UID: "db26beed-354c-4f85-bacd-6c069431fa0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.072308 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db26beed-354c-4f85-bacd-6c069431fa0d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.072355 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7d8g\" (UniqueName: \"kubernetes.io/projected/db26beed-354c-4f85-bacd-6c069431fa0d-kube-api-access-z7d8g\") on node \"crc\" DevicePath \"\"" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.293850 4806 generic.go:334] "Generic (PLEG): container finished" podID="db26beed-354c-4f85-bacd-6c069431fa0d" containerID="101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc" exitCode=0 Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.294786 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4r2qm" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.294756 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r2qm" event={"ID":"db26beed-354c-4f85-bacd-6c069431fa0d","Type":"ContainerDied","Data":"101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc"} Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.294949 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4r2qm" event={"ID":"db26beed-354c-4f85-bacd-6c069431fa0d","Type":"ContainerDied","Data":"06d85710edabac3ccd4592ddcc19a84c011333f5d81b03262fdf88e9055b22d7"} Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.295000 4806 scope.go:117] "RemoveContainer" containerID="101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.328018 4806 scope.go:117] "RemoveContainer" containerID="2379791753c90d5d1f1550bc7582cd6fae348069bfc24840d68ae18bb7bf106f" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.332950 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4r2qm"] Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.342479 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4r2qm"] Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.374792 4806 scope.go:117] "RemoveContainer" containerID="9313ce5730bbfbc94876abebeef59a286d7dedb62bb081c69ccb0fb8d0f40aae" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.429158 4806 scope.go:117] "RemoveContainer" containerID="101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc" Jan 26 08:19:24 crc kubenswrapper[4806]: E0126 08:19:24.430182 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc\": container with ID starting with 101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc not found: ID does not exist" containerID="101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.430207 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc"} err="failed to get container status 
\"101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc\": rpc error: code = NotFound desc = could not find container \"101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc\": container with ID starting with 101c433f84458f8194fd7a64fe95a5d9654c56da14d3ad53a903300ced722ccc not found: ID does not exist" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.430228 4806 scope.go:117] "RemoveContainer" containerID="2379791753c90d5d1f1550bc7582cd6fae348069bfc24840d68ae18bb7bf106f" Jan 26 08:19:24 crc kubenswrapper[4806]: E0126 08:19:24.430454 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2379791753c90d5d1f1550bc7582cd6fae348069bfc24840d68ae18bb7bf106f\": container with ID starting with 2379791753c90d5d1f1550bc7582cd6fae348069bfc24840d68ae18bb7bf106f not found: ID does not exist" containerID="2379791753c90d5d1f1550bc7582cd6fae348069bfc24840d68ae18bb7bf106f" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.430475 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2379791753c90d5d1f1550bc7582cd6fae348069bfc24840d68ae18bb7bf106f"} err="failed to get container status \"2379791753c90d5d1f1550bc7582cd6fae348069bfc24840d68ae18bb7bf106f\": rpc error: code = NotFound desc = could not find container \"2379791753c90d5d1f1550bc7582cd6fae348069bfc24840d68ae18bb7bf106f\": container with ID starting with 2379791753c90d5d1f1550bc7582cd6fae348069bfc24840d68ae18bb7bf106f not found: ID does not exist" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.430491 4806 scope.go:117] "RemoveContainer" containerID="9313ce5730bbfbc94876abebeef59a286d7dedb62bb081c69ccb0fb8d0f40aae" Jan 26 08:19:24 crc kubenswrapper[4806]: E0126 08:19:24.430807 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9313ce5730bbfbc94876abebeef59a286d7dedb62bb081c69ccb0fb8d0f40aae\": container with ID starting with 9313ce5730bbfbc94876abebeef59a286d7dedb62bb081c69ccb0fb8d0f40aae not found: ID does not exist" containerID="9313ce5730bbfbc94876abebeef59a286d7dedb62bb081c69ccb0fb8d0f40aae" Jan 26 08:19:24 crc kubenswrapper[4806]: I0126 08:19:24.430825 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9313ce5730bbfbc94876abebeef59a286d7dedb62bb081c69ccb0fb8d0f40aae"} err="failed to get container status \"9313ce5730bbfbc94876abebeef59a286d7dedb62bb081c69ccb0fb8d0f40aae\": rpc error: code = NotFound desc = could not find container \"9313ce5730bbfbc94876abebeef59a286d7dedb62bb081c69ccb0fb8d0f40aae\": container with ID starting with 9313ce5730bbfbc94876abebeef59a286d7dedb62bb081c69ccb0fb8d0f40aae not found: ID does not exist" Jan 26 08:19:25 crc kubenswrapper[4806]: I0126 08:19:25.068591 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db26beed-354c-4f85-bacd-6c069431fa0d" path="/var/lib/kubelet/pods/db26beed-354c-4f85-bacd-6c069431fa0d/volumes" Jan 26 08:19:58 crc kubenswrapper[4806]: I0126 08:19:58.604257 4806 generic.go:334] "Generic (PLEG): container finished" podID="b38882dc-facd-46ab-96ce-176528439b16" containerID="7ea37ca8b41b004c3f179412c32c2a0b4dd6856565bb6146bcbeceffd1ce0e2c" exitCode=0 Jan 26 08:19:58 crc kubenswrapper[4806]: I0126 08:19:58.604346 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" 
event={"ID":"b38882dc-facd-46ab-96ce-176528439b16","Type":"ContainerDied","Data":"7ea37ca8b41b004c3f179412c32c2a0b4dd6856565bb6146bcbeceffd1ce0e2c"} Jan 26 08:19:59 crc kubenswrapper[4806]: I0126 08:19:59.057177 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-65f6-account-create-update-l6nk9"] Jan 26 08:19:59 crc kubenswrapper[4806]: I0126 08:19:59.059677 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jlcbm"] Jan 26 08:19:59 crc kubenswrapper[4806]: I0126 08:19:59.069229 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jlcbm"] Jan 26 08:19:59 crc kubenswrapper[4806]: I0126 08:19:59.081540 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-65f6-account-create-update-l6nk9"] Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.047763 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.117543 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-inventory\") pod \"b38882dc-facd-46ab-96ce-176528439b16\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.117583 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-bootstrap-combined-ca-bundle\") pod \"b38882dc-facd-46ab-96ce-176528439b16\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.117642 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-ssh-key-openstack-edpm-ipam\") pod \"b38882dc-facd-46ab-96ce-176528439b16\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.117682 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gp4ch\" (UniqueName: \"kubernetes.io/projected/b38882dc-facd-46ab-96ce-176528439b16-kube-api-access-gp4ch\") pod \"b38882dc-facd-46ab-96ce-176528439b16\" (UID: \"b38882dc-facd-46ab-96ce-176528439b16\") " Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.130268 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "b38882dc-facd-46ab-96ce-176528439b16" (UID: "b38882dc-facd-46ab-96ce-176528439b16"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.142355 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b38882dc-facd-46ab-96ce-176528439b16-kube-api-access-gp4ch" (OuterVolumeSpecName: "kube-api-access-gp4ch") pod "b38882dc-facd-46ab-96ce-176528439b16" (UID: "b38882dc-facd-46ab-96ce-176528439b16"). InnerVolumeSpecName "kube-api-access-gp4ch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.147616 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b38882dc-facd-46ab-96ce-176528439b16" (UID: "b38882dc-facd-46ab-96ce-176528439b16"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.147684 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-inventory" (OuterVolumeSpecName: "inventory") pod "b38882dc-facd-46ab-96ce-176528439b16" (UID: "b38882dc-facd-46ab-96ce-176528439b16"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.220443 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gp4ch\" (UniqueName: \"kubernetes.io/projected/b38882dc-facd-46ab-96ce-176528439b16-kube-api-access-gp4ch\") on node \"crc\" DevicePath \"\"" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.220476 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.220486 4806 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.220496 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b38882dc-facd-46ab-96ce-176528439b16-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.621409 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" event={"ID":"b38882dc-facd-46ab-96ce-176528439b16","Type":"ContainerDied","Data":"b359cb48cbb1098af00c44e30926cef84a84bff4b65e0fa5a4b3c68c408c4129"} Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.621805 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b359cb48cbb1098af00c44e30926cef84a84bff4b65e0fa5a4b3c68c408c4129" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.621512 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.735474 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb"] Jan 26 08:20:00 crc kubenswrapper[4806]: E0126 08:20:00.735894 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db26beed-354c-4f85-bacd-6c069431fa0d" containerName="extract-utilities" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.735915 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="db26beed-354c-4f85-bacd-6c069431fa0d" containerName="extract-utilities" Jan 26 08:20:00 crc kubenswrapper[4806]: E0126 08:20:00.735930 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db26beed-354c-4f85-bacd-6c069431fa0d" containerName="registry-server" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.735937 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="db26beed-354c-4f85-bacd-6c069431fa0d" containerName="registry-server" Jan 26 08:20:00 crc kubenswrapper[4806]: E0126 08:20:00.735964 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db26beed-354c-4f85-bacd-6c069431fa0d" containerName="extract-content" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.735969 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="db26beed-354c-4f85-bacd-6c069431fa0d" containerName="extract-content" Jan 26 08:20:00 crc kubenswrapper[4806]: E0126 08:20:00.735979 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b38882dc-facd-46ab-96ce-176528439b16" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.735985 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b38882dc-facd-46ab-96ce-176528439b16" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.736163 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b38882dc-facd-46ab-96ce-176528439b16" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.736189 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="db26beed-354c-4f85-bacd-6c069431fa0d" containerName="registry-server" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.736797 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.738787 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.738995 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.742063 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.742242 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.751095 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb"] Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.832683 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12a200ee-7089-445b-a0eb-ae7fce15f5ec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb\" (UID: \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.832768 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12a200ee-7089-445b-a0eb-ae7fce15f5ec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb\" (UID: \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.832866 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgmfc\" (UniqueName: \"kubernetes.io/projected/12a200ee-7089-445b-a0eb-ae7fce15f5ec-kube-api-access-qgmfc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb\" (UID: \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.934309 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgmfc\" (UniqueName: \"kubernetes.io/projected/12a200ee-7089-445b-a0eb-ae7fce15f5ec-kube-api-access-qgmfc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb\" (UID: \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.934445 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12a200ee-7089-445b-a0eb-ae7fce15f5ec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb\" (UID: \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.934510 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/12a200ee-7089-445b-a0eb-ae7fce15f5ec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb\" (UID: \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.939778 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12a200ee-7089-445b-a0eb-ae7fce15f5ec-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb\" (UID: \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.943008 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12a200ee-7089-445b-a0eb-ae7fce15f5ec-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb\" (UID: \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:20:00 crc kubenswrapper[4806]: I0126 08:20:00.953738 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgmfc\" (UniqueName: \"kubernetes.io/projected/12a200ee-7089-445b-a0eb-ae7fce15f5ec-kube-api-access-qgmfc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb\" (UID: \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:20:01 crc kubenswrapper[4806]: I0126 08:20:01.052448 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:20:01 crc kubenswrapper[4806]: I0126 08:20:01.057275 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301f16bd-223a-43a2-89cb-1bff1beac16e" path="/var/lib/kubelet/pods/301f16bd-223a-43a2-89cb-1bff1beac16e/volumes" Jan 26 08:20:01 crc kubenswrapper[4806]: I0126 08:20:01.059217 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fecccc8-6319-47b6-9dcb-e1d09c53cc1f" path="/var/lib/kubelet/pods/7fecccc8-6319-47b6-9dcb-e1d09c53cc1f/volumes" Jan 26 08:20:01 crc kubenswrapper[4806]: I0126 08:20:01.571670 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb"] Jan 26 08:20:01 crc kubenswrapper[4806]: I0126 08:20:01.637237 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" event={"ID":"12a200ee-7089-445b-a0eb-ae7fce15f5ec","Type":"ContainerStarted","Data":"67c0f64f101c24e7085a14d96eec692664bca8293ec79ae174dc87a717063410"} Jan 26 08:20:02 crc kubenswrapper[4806]: I0126 08:20:02.651786 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" event={"ID":"12a200ee-7089-445b-a0eb-ae7fce15f5ec","Type":"ContainerStarted","Data":"c9e2a5af94d911ebba33cc7c62b3882ac4718ed5f4c34e1b38968309d55d2c84"} Jan 26 08:20:02 crc kubenswrapper[4806]: I0126 08:20:02.686806 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" podStartSLOduration=2.252179173 podStartE2EDuration="2.686784637s" podCreationTimestamp="2026-01-26 08:20:00 +0000 UTC" 
firstStartedPulling="2026-01-26 08:20:01.583339609 +0000 UTC m=+1580.847747665" lastFinishedPulling="2026-01-26 08:20:02.017945073 +0000 UTC m=+1581.282353129" observedRunningTime="2026-01-26 08:20:02.668507226 +0000 UTC m=+1581.932915292" watchObservedRunningTime="2026-01-26 08:20:02.686784637 +0000 UTC m=+1581.951192703" Jan 26 08:20:07 crc kubenswrapper[4806]: I0126 08:20:07.054215 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8eec-account-create-update-5h69f"] Jan 26 08:20:07 crc kubenswrapper[4806]: I0126 08:20:07.059630 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4502-account-create-update-kd7nt"] Jan 26 08:20:07 crc kubenswrapper[4806]: I0126 08:20:07.070640 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-v7b2d"] Jan 26 08:20:07 crc kubenswrapper[4806]: I0126 08:20:07.081170 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-x6n6z"] Jan 26 08:20:07 crc kubenswrapper[4806]: I0126 08:20:07.090836 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-v7b2d"] Jan 26 08:20:07 crc kubenswrapper[4806]: I0126 08:20:07.099032 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8eec-account-create-update-5h69f"] Jan 26 08:20:07 crc kubenswrapper[4806]: I0126 08:20:07.107668 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-x6n6z"] Jan 26 08:20:07 crc kubenswrapper[4806]: I0126 08:20:07.116168 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4502-account-create-update-kd7nt"] Jan 26 08:20:09 crc kubenswrapper[4806]: I0126 08:20:09.054465 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035ee86e-30e8-4e6c-9e99-6e0abca4fa67" path="/var/lib/kubelet/pods/035ee86e-30e8-4e6c-9e99-6e0abca4fa67/volumes" Jan 26 08:20:09 crc kubenswrapper[4806]: I0126 08:20:09.056878 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="317adfae-9113-4a76-964b-063e9c840848" path="/var/lib/kubelet/pods/317adfae-9113-4a76-964b-063e9c840848/volumes" Jan 26 08:20:09 crc kubenswrapper[4806]: I0126 08:20:09.058450 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37971779-caab-4f56-a749-e545819352ce" path="/var/lib/kubelet/pods/37971779-caab-4f56-a749-e545819352ce/volumes" Jan 26 08:20:09 crc kubenswrapper[4806]: I0126 08:20:09.059326 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43b0a02c-c897-4cb4-bc1e-f478cec82e6a" path="/var/lib/kubelet/pods/43b0a02c-c897-4cb4-bc1e-f478cec82e6a/volumes" Jan 26 08:20:24 crc kubenswrapper[4806]: I0126 08:20:24.040567 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4xvcz"] Jan 26 08:20:24 crc kubenswrapper[4806]: I0126 08:20:24.050773 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4xvcz"] Jan 26 08:20:25 crc kubenswrapper[4806]: I0126 08:20:25.052564 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e77d0cb-b2a5-443f-a47a-7ab76309eee5" path="/var/lib/kubelet/pods/5e77d0cb-b2a5-443f-a47a-7ab76309eee5/volumes" Jan 26 08:20:34 crc kubenswrapper[4806]: I0126 08:20:34.039483 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-77pc8"] Jan 26 08:20:34 crc kubenswrapper[4806]: I0126 08:20:34.051954 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-db-create-77pc8"] Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.054079 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ec02cc0-9e30-460d-938a-b04b357649d3" path="/var/lib/kubelet/pods/6ec02cc0-9e30-460d-938a-b04b357649d3/volumes" Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.060226 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-1468-account-create-update-kfzbt"] Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.074502 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7f99-account-create-update-7hh6g"] Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.088255 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-afd2-account-create-update-dnjwx"] Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.098404 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-afd2-account-create-update-dnjwx"] Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.112350 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7f99-account-create-update-7hh6g"] Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.120193 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-1468-account-create-update-kfzbt"] Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.130174 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-mj4kn"] Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.141701 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-9cfwh"] Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.150356 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-mj4kn"] Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.158976 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-9cfwh"] Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.301639 4806 scope.go:117] "RemoveContainer" containerID="b8c9a49db432ddc8cd606bc0771f04ae2157303dc1d575109f83630db4d47dd2" Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.336936 4806 scope.go:117] "RemoveContainer" containerID="907e206e7d1b1b2da686c4378556c94068073ada938878ed7245f28ca8dc6a50" Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.383051 4806 scope.go:117] "RemoveContainer" containerID="c677256371bd406932dc9892939917e181d2d520d385383e77cf3beeff2cd9c4" Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.427067 4806 scope.go:117] "RemoveContainer" containerID="a7f1a5a5ef35b3b2a50d9879b06911860606d70b6a99d186239de2d0e6c1503c" Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.469330 4806 scope.go:117] "RemoveContainer" containerID="7a6f6c0bca9dca4ecbf086f470e8ef2e80044c7ff32490763c6c78221d131739" Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.493668 4806 scope.go:117] "RemoveContainer" containerID="f2bdba503a501b38e95f010154a4d5eb0e0014df07100e037c08be6a8074d791" Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.558808 4806 scope.go:117] "RemoveContainer" containerID="d839a21d9f5680a0000ddc6233a37b8e2d9de5992e5ef6d9eb9cb408d42deadb" Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.583437 4806 scope.go:117] "RemoveContainer" containerID="e4a3dabd6e807e0d9aa6e5dfe597dc616e6a1ced935240104640811eeb270686" Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.605548 4806 scope.go:117] "RemoveContainer" 
containerID="371966ff7583c52a4523d9a493ecf17e3934207445d577ff97a9b3c9d9d84fe7" Jan 26 08:20:35 crc kubenswrapper[4806]: I0126 08:20:35.623899 4806 scope.go:117] "RemoveContainer" containerID="1c2571f0fb7c51c720262fd6186f9b1504a9adee5c43e49791b0b411023e70f7" Jan 26 08:20:37 crc kubenswrapper[4806]: I0126 08:20:37.053912 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bffa786-3a1c-4303-b303-8500b3614ab8" path="/var/lib/kubelet/pods/0bffa786-3a1c-4303-b303-8500b3614ab8/volumes" Jan 26 08:20:37 crc kubenswrapper[4806]: I0126 08:20:37.055131 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f" path="/var/lib/kubelet/pods/2ce3e16b-0b9d-42fc-bd7d-d8e01a190a5f/volumes" Jan 26 08:20:37 crc kubenswrapper[4806]: I0126 08:20:37.056623 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="979ab357-98b5-4ee2-87d8-678702adfab2" path="/var/lib/kubelet/pods/979ab357-98b5-4ee2-87d8-678702adfab2/volumes" Jan 26 08:20:37 crc kubenswrapper[4806]: I0126 08:20:37.058079 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7cf5184-3f6b-426b-a01c-07ba5de2b9fc" path="/var/lib/kubelet/pods/c7cf5184-3f6b-426b-a01c-07ba5de2b9fc/volumes" Jan 26 08:20:37 crc kubenswrapper[4806]: I0126 08:20:37.060065 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece2b5de-984b-4a8c-8115-e84363f5f599" path="/var/lib/kubelet/pods/ece2b5de-984b-4a8c-8115-e84363f5f599/volumes" Jan 26 08:20:39 crc kubenswrapper[4806]: I0126 08:20:39.057052 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-b311-account-create-update-mv65h"] Jan 26 08:20:39 crc kubenswrapper[4806]: I0126 08:20:39.076612 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-qc6mc"] Jan 26 08:20:39 crc kubenswrapper[4806]: I0126 08:20:39.096023 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-b311-account-create-update-mv65h"] Jan 26 08:20:39 crc kubenswrapper[4806]: I0126 08:20:39.109415 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-qc6mc"] Jan 26 08:20:41 crc kubenswrapper[4806]: I0126 08:20:41.064200 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78af78c0-adca-4ff1-960f-5d8f918e2a1a" path="/var/lib/kubelet/pods/78af78c0-adca-4ff1-960f-5d8f918e2a1a/volumes" Jan 26 08:20:41 crc kubenswrapper[4806]: I0126 08:20:41.066217 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb76c3b5-a137-4408-9cc4-7e17505b7989" path="/var/lib/kubelet/pods/bb76c3b5-a137-4408-9cc4-7e17505b7989/volumes" Jan 26 08:20:43 crc kubenswrapper[4806]: I0126 08:20:43.029889 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-pbrx8"] Jan 26 08:20:43 crc kubenswrapper[4806]: I0126 08:20:43.039423 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-pbrx8"] Jan 26 08:20:43 crc kubenswrapper[4806]: I0126 08:20:43.055667 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ba70c1a-7213-421b-b154-ac57621252b8" path="/var/lib/kubelet/pods/0ba70c1a-7213-421b-b154-ac57621252b8/volumes" Jan 26 08:20:45 crc kubenswrapper[4806]: I0126 08:20:45.806237 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 26 08:20:45 crc kubenswrapper[4806]: I0126 08:20:45.806629 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:20:55 crc kubenswrapper[4806]: I0126 08:20:55.062191 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-hfpkt"] Jan 26 08:20:55 crc kubenswrapper[4806]: I0126 08:20:55.062892 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-hfpkt"] Jan 26 08:20:57 crc kubenswrapper[4806]: I0126 08:20:57.052625 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f602c552-c375-4d9b-96fc-633ad5811f7d" path="/var/lib/kubelet/pods/f602c552-c375-4d9b-96fc-633ad5811f7d/volumes" Jan 26 08:21:15 crc kubenswrapper[4806]: I0126 08:21:15.806291 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:21:15 crc kubenswrapper[4806]: I0126 08:21:15.806772 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:21:20 crc kubenswrapper[4806]: I0126 08:21:20.059394 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-r5vvf"] Jan 26 08:21:20 crc kubenswrapper[4806]: I0126 08:21:20.069152 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-r5vvf"] Jan 26 08:21:21 crc kubenswrapper[4806]: I0126 08:21:21.058002 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a51881-d18e-40dd-8dfb-a243d798133a" path="/var/lib/kubelet/pods/b0a51881-d18e-40dd-8dfb-a243d798133a/volumes" Jan 26 08:21:29 crc kubenswrapper[4806]: I0126 08:21:29.028329 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hnszz"] Jan 26 08:21:29 crc kubenswrapper[4806]: I0126 08:21:29.037557 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hnszz"] Jan 26 08:21:29 crc kubenswrapper[4806]: I0126 08:21:29.052679 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86ed2345-2edc-46bb-a416-3cfa5c01b38d" path="/var/lib/kubelet/pods/86ed2345-2edc-46bb-a416-3cfa5c01b38d/volumes" Jan 26 08:21:30 crc kubenswrapper[4806]: I0126 08:21:30.030560 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-btkzm"] Jan 26 08:21:30 crc kubenswrapper[4806]: I0126 08:21:30.039476 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-btkzm"] Jan 26 08:21:31 crc kubenswrapper[4806]: I0126 08:21:31.055761 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a63a316-0795-4795-8662-5c0b2de2597f" path="/var/lib/kubelet/pods/6a63a316-0795-4795-8662-5c0b2de2597f/volumes" Jan 26 08:21:33 crc kubenswrapper[4806]: I0126 08:21:33.056239 4806 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/barbican-db-sync-8dksv"] Jan 26 08:21:33 crc kubenswrapper[4806]: I0126 08:21:33.060867 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-8dksv"] Jan 26 08:21:35 crc kubenswrapper[4806]: I0126 08:21:35.075975 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4588263f-b01b-4a54-829f-1cef11d1dbd3" path="/var/lib/kubelet/pods/4588263f-b01b-4a54-829f-1cef11d1dbd3/volumes" Jan 26 08:21:35 crc kubenswrapper[4806]: I0126 08:21:35.810410 4806 scope.go:117] "RemoveContainer" containerID="0905251ab4adf243890245d4a714a0591b6c1f6013cf18411905feebb817b2c3" Jan 26 08:21:35 crc kubenswrapper[4806]: I0126 08:21:35.842679 4806 scope.go:117] "RemoveContainer" containerID="15a235ce747e901deba70eabeb9041e9204463033815d5fac24a664085f0b122" Jan 26 08:21:35 crc kubenswrapper[4806]: I0126 08:21:35.887267 4806 scope.go:117] "RemoveContainer" containerID="ae8bbb0365b30b6e5f5085780b82e519ddb37473f7440ea8445c9598ce41ad54" Jan 26 08:21:35 crc kubenswrapper[4806]: I0126 08:21:35.932033 4806 scope.go:117] "RemoveContainer" containerID="2d56710877ad8fe55db99dc1c23ddf68117a93ae6e90fdf4e80f5fbff5a790c3" Jan 26 08:21:36 crc kubenswrapper[4806]: I0126 08:21:36.003617 4806 scope.go:117] "RemoveContainer" containerID="8a25122d6c9fad04d754046184c87ea909a7a3437fdcfded27819a663fb1f063" Jan 26 08:21:36 crc kubenswrapper[4806]: I0126 08:21:36.043195 4806 scope.go:117] "RemoveContainer" containerID="ed2485be131e4cad4b6ee955c2e0c99fc91b6551e9083f5ac7d8e12c02c13027" Jan 26 08:21:36 crc kubenswrapper[4806]: I0126 08:21:36.081170 4806 scope.go:117] "RemoveContainer" containerID="05c7a5902e97ab25c9fcfb02045aa100127567179f4be68e49409958a671cc9c" Jan 26 08:21:36 crc kubenswrapper[4806]: I0126 08:21:36.099177 4806 scope.go:117] "RemoveContainer" containerID="9a79b1dcfcbfab56017d36700ba15abd4bff6228c5d823928b0e2b73ce2cf02b" Jan 26 08:21:36 crc kubenswrapper[4806]: I0126 08:21:36.123871 4806 scope.go:117] "RemoveContainer" containerID="fe8f1dbf123a7ed8f81d7773dea0015a57089e658ae2a3760eead5826aeece01" Jan 26 08:21:36 crc kubenswrapper[4806]: I0126 08:21:36.147038 4806 scope.go:117] "RemoveContainer" containerID="f5afbd855fc295ff0dfcceb591fa970b5bf97f180e0f1d5686519f0411226b48" Jan 26 08:21:36 crc kubenswrapper[4806]: I0126 08:21:36.170512 4806 scope.go:117] "RemoveContainer" containerID="bb20d674a5a435f4c70852326dd1a654cf2c0661e2fe07882dc9bad948f27578" Jan 26 08:21:36 crc kubenswrapper[4806]: I0126 08:21:36.193170 4806 scope.go:117] "RemoveContainer" containerID="cb07ccce8416892e78fbdd092a4131eded9eeed3dcf01fc6280660e1a124e48a" Jan 26 08:21:36 crc kubenswrapper[4806]: I0126 08:21:36.212507 4806 scope.go:117] "RemoveContainer" containerID="a4fedad710b52b7d491be18c79d774060b6fd791076c1359f72a6fc755541add" Jan 26 08:21:42 crc kubenswrapper[4806]: I0126 08:21:42.038669 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-bjtkx"] Jan 26 08:21:42 crc kubenswrapper[4806]: I0126 08:21:42.047081 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-bjtkx"] Jan 26 08:21:43 crc kubenswrapper[4806]: I0126 08:21:43.051651 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19528149-09a1-44a5-b419-bbe91789d493" path="/var/lib/kubelet/pods/19528149-09a1-44a5-b419-bbe91789d493/volumes" Jan 26 08:21:45 crc kubenswrapper[4806]: I0126 08:21:45.031296 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qw29c"] Jan 26 08:21:45 crc kubenswrapper[4806]: I0126 
08:21:45.074835 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qw29c"] Jan 26 08:21:45 crc kubenswrapper[4806]: I0126 08:21:45.806636 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:21:45 crc kubenswrapper[4806]: I0126 08:21:45.806748 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:21:45 crc kubenswrapper[4806]: I0126 08:21:45.806843 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:21:45 crc kubenswrapper[4806]: I0126 08:21:45.808737 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:21:45 crc kubenswrapper[4806]: I0126 08:21:45.808889 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" gracePeriod=600 Jan 26 08:21:45 crc kubenswrapper[4806]: E0126 08:21:45.931396 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:21:46 crc kubenswrapper[4806]: I0126 08:21:46.640154 4806 generic.go:334] "Generic (PLEG): container finished" podID="12a200ee-7089-445b-a0eb-ae7fce15f5ec" containerID="c9e2a5af94d911ebba33cc7c62b3882ac4718ed5f4c34e1b38968309d55d2c84" exitCode=0 Jan 26 08:21:46 crc kubenswrapper[4806]: I0126 08:21:46.640236 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" event={"ID":"12a200ee-7089-445b-a0eb-ae7fce15f5ec","Type":"ContainerDied","Data":"c9e2a5af94d911ebba33cc7c62b3882ac4718ed5f4c34e1b38968309d55d2c84"} Jan 26 08:21:46 crc kubenswrapper[4806]: I0126 08:21:46.643458 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" exitCode=0 Jan 26 08:21:46 crc kubenswrapper[4806]: I0126 08:21:46.643504 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" 
event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349"} Jan 26 08:21:46 crc kubenswrapper[4806]: I0126 08:21:46.643555 4806 scope.go:117] "RemoveContainer" containerID="09669619f64d4d35cd31b87d98b04e88f92b9a54a34f625c50be4875e6fefe66" Jan 26 08:21:46 crc kubenswrapper[4806]: I0126 08:21:46.644643 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:21:46 crc kubenswrapper[4806]: E0126 08:21:46.645167 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:21:47 crc kubenswrapper[4806]: I0126 08:21:47.054461 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc6102bf-7483-4063-af9d-841e78398b0c" path="/var/lib/kubelet/pods/bc6102bf-7483-4063-af9d-841e78398b0c/volumes" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.051080 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.217141 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12a200ee-7089-445b-a0eb-ae7fce15f5ec-ssh-key-openstack-edpm-ipam\") pod \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\" (UID: \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\") " Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.217344 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12a200ee-7089-445b-a0eb-ae7fce15f5ec-inventory\") pod \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\" (UID: \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\") " Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.217385 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgmfc\" (UniqueName: \"kubernetes.io/projected/12a200ee-7089-445b-a0eb-ae7fce15f5ec-kube-api-access-qgmfc\") pod \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\" (UID: \"12a200ee-7089-445b-a0eb-ae7fce15f5ec\") " Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.222573 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a200ee-7089-445b-a0eb-ae7fce15f5ec-kube-api-access-qgmfc" (OuterVolumeSpecName: "kube-api-access-qgmfc") pod "12a200ee-7089-445b-a0eb-ae7fce15f5ec" (UID: "12a200ee-7089-445b-a0eb-ae7fce15f5ec"). InnerVolumeSpecName "kube-api-access-qgmfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.243714 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12a200ee-7089-445b-a0eb-ae7fce15f5ec-inventory" (OuterVolumeSpecName: "inventory") pod "12a200ee-7089-445b-a0eb-ae7fce15f5ec" (UID: "12a200ee-7089-445b-a0eb-ae7fce15f5ec"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.243934 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12a200ee-7089-445b-a0eb-ae7fce15f5ec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "12a200ee-7089-445b-a0eb-ae7fce15f5ec" (UID: "12a200ee-7089-445b-a0eb-ae7fce15f5ec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.319452 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/12a200ee-7089-445b-a0eb-ae7fce15f5ec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.319482 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/12a200ee-7089-445b-a0eb-ae7fce15f5ec-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.319491 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgmfc\" (UniqueName: \"kubernetes.io/projected/12a200ee-7089-445b-a0eb-ae7fce15f5ec-kube-api-access-qgmfc\") on node \"crc\" DevicePath \"\"" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.676211 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" event={"ID":"12a200ee-7089-445b-a0eb-ae7fce15f5ec","Type":"ContainerDied","Data":"67c0f64f101c24e7085a14d96eec692664bca8293ec79ae174dc87a717063410"} Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.676746 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67c0f64f101c24e7085a14d96eec692664bca8293ec79ae174dc87a717063410" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.676554 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.767094 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w"] Jan 26 08:21:48 crc kubenswrapper[4806]: E0126 08:21:48.767971 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12a200ee-7089-445b-a0eb-ae7fce15f5ec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.768123 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="12a200ee-7089-445b-a0eb-ae7fce15f5ec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.768551 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="12a200ee-7089-445b-a0eb-ae7fce15f5ec" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.769727 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.773512 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.773908 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.776368 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.779198 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.783558 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w"] Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.934985 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290f172f-1b02-41e0-a865-c926792e9121-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vb25w\" (UID: \"290f172f-1b02-41e0-a865-c926792e9121\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.935093 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290f172f-1b02-41e0-a865-c926792e9121-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vb25w\" (UID: \"290f172f-1b02-41e0-a865-c926792e9121\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:21:48 crc kubenswrapper[4806]: I0126 08:21:48.935133 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znvxd\" (UniqueName: \"kubernetes.io/projected/290f172f-1b02-41e0-a865-c926792e9121-kube-api-access-znvxd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vb25w\" (UID: \"290f172f-1b02-41e0-a865-c926792e9121\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:21:49 crc kubenswrapper[4806]: I0126 08:21:49.036733 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290f172f-1b02-41e0-a865-c926792e9121-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vb25w\" (UID: \"290f172f-1b02-41e0-a865-c926792e9121\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:21:49 crc kubenswrapper[4806]: I0126 08:21:49.036988 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znvxd\" (UniqueName: \"kubernetes.io/projected/290f172f-1b02-41e0-a865-c926792e9121-kube-api-access-znvxd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vb25w\" (UID: \"290f172f-1b02-41e0-a865-c926792e9121\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:21:49 crc kubenswrapper[4806]: I0126 08:21:49.037162 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/290f172f-1b02-41e0-a865-c926792e9121-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vb25w\" (UID: \"290f172f-1b02-41e0-a865-c926792e9121\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:21:49 crc kubenswrapper[4806]: I0126 08:21:49.042944 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290f172f-1b02-41e0-a865-c926792e9121-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vb25w\" (UID: \"290f172f-1b02-41e0-a865-c926792e9121\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:21:49 crc kubenswrapper[4806]: I0126 08:21:49.043919 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290f172f-1b02-41e0-a865-c926792e9121-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vb25w\" (UID: \"290f172f-1b02-41e0-a865-c926792e9121\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:21:49 crc kubenswrapper[4806]: I0126 08:21:49.053118 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znvxd\" (UniqueName: \"kubernetes.io/projected/290f172f-1b02-41e0-a865-c926792e9121-kube-api-access-znvxd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vb25w\" (UID: \"290f172f-1b02-41e0-a865-c926792e9121\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:21:49 crc kubenswrapper[4806]: I0126 08:21:49.098230 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:21:49 crc kubenswrapper[4806]: I0126 08:21:49.618559 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w"] Jan 26 08:21:49 crc kubenswrapper[4806]: I0126 08:21:49.683902 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" event={"ID":"290f172f-1b02-41e0-a865-c926792e9121","Type":"ContainerStarted","Data":"2296eb72b7dee013c62b493f3dd069e3b8663629d79ca57227b9d6cc6f9739b8"} Jan 26 08:21:50 crc kubenswrapper[4806]: I0126 08:21:50.694918 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" event={"ID":"290f172f-1b02-41e0-a865-c926792e9121","Type":"ContainerStarted","Data":"215467bdb782e4070bd758d02b723f745b6f31b9c4c8119efc114c2bdcdb9234"} Jan 26 08:21:50 crc kubenswrapper[4806]: I0126 08:21:50.733191 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" podStartSLOduration=2.261380401 podStartE2EDuration="2.733170403s" podCreationTimestamp="2026-01-26 08:21:48 +0000 UTC" firstStartedPulling="2026-01-26 08:21:49.630627414 +0000 UTC m=+1688.895035480" lastFinishedPulling="2026-01-26 08:21:50.102417396 +0000 UTC m=+1689.366825482" observedRunningTime="2026-01-26 08:21:50.725767835 +0000 UTC m=+1689.990175921" watchObservedRunningTime="2026-01-26 08:21:50.733170403 +0000 UTC m=+1689.997578469" Jan 26 08:22:02 crc kubenswrapper[4806]: I0126 08:22:02.042808 4806 scope.go:117] "RemoveContainer" 
containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:22:02 crc kubenswrapper[4806]: E0126 08:22:02.045272 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:22:16 crc kubenswrapper[4806]: I0126 08:22:16.042026 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:22:16 crc kubenswrapper[4806]: E0126 08:22:16.042721 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:22:27 crc kubenswrapper[4806]: I0126 08:22:27.042617 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:22:27 crc kubenswrapper[4806]: E0126 08:22:27.043401 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:22:36 crc kubenswrapper[4806]: I0126 08:22:36.447645 4806 scope.go:117] "RemoveContainer" containerID="2fe5ae91a9473734ce41faf4efb4de45a5d442716ca0e10fd78e7008169ce5c0" Jan 26 08:22:36 crc kubenswrapper[4806]: I0126 08:22:36.473669 4806 scope.go:117] "RemoveContainer" containerID="a9564fe8c4b2397cae0a3995b2fda49cd35376fd0adf3daf75579d44162e21a2" Jan 26 08:22:39 crc kubenswrapper[4806]: I0126 08:22:39.043489 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:22:39 crc kubenswrapper[4806]: E0126 08:22:39.044136 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:22:50 crc kubenswrapper[4806]: I0126 08:22:50.042114 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:22:50 crc kubenswrapper[4806]: E0126 08:22:50.042997 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:22:56 crc kubenswrapper[4806]: I0126 08:22:56.047239 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-c947-account-create-update-7ctwz"] Jan 26 08:22:56 crc kubenswrapper[4806]: I0126 08:22:56.058497 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-kljvf"] Jan 26 08:22:56 crc kubenswrapper[4806]: I0126 08:22:56.067561 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-kljvf"] Jan 26 08:22:56 crc kubenswrapper[4806]: I0126 08:22:56.075036 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-c947-account-create-update-7ctwz"] Jan 26 08:22:57 crc kubenswrapper[4806]: I0126 08:22:57.031008 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-fe96-account-create-update-fc2tj"] Jan 26 08:22:57 crc kubenswrapper[4806]: I0126 08:22:57.039960 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-t5vh7"] Jan 26 08:22:57 crc kubenswrapper[4806]: I0126 08:22:57.053290 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eda8877-1136-4de4-8bdf-b53e018a7a7b" path="/var/lib/kubelet/pods/7eda8877-1136-4de4-8bdf-b53e018a7a7b/volumes" Jan 26 08:22:57 crc kubenswrapper[4806]: I0126 08:22:57.055117 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e1e344-4554-4155-bb19-26a51af1af1a" path="/var/lib/kubelet/pods/f8e1e344-4554-4155-bb19-26a51af1af1a/volumes" Jan 26 08:22:57 crc kubenswrapper[4806]: I0126 08:22:57.055815 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-t5vh7"] Jan 26 08:22:57 crc kubenswrapper[4806]: I0126 08:22:57.062187 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-5sxlk"] Jan 26 08:22:57 crc kubenswrapper[4806]: I0126 08:22:57.070282 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-fe96-account-create-update-fc2tj"] Jan 26 08:22:57 crc kubenswrapper[4806]: I0126 08:22:57.077364 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-fb8d-account-create-update-jt484"] Jan 26 08:22:57 crc kubenswrapper[4806]: I0126 08:22:57.084140 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-fb8d-account-create-update-jt484"] Jan 26 08:22:57 crc kubenswrapper[4806]: I0126 08:22:57.091543 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-5sxlk"] Jan 26 08:22:59 crc kubenswrapper[4806]: I0126 08:22:59.053065 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e44fbc1-418c-4be1-bd7e-70489014622c" path="/var/lib/kubelet/pods/2e44fbc1-418c-4be1-bd7e-70489014622c/volumes" Jan 26 08:22:59 crc kubenswrapper[4806]: I0126 08:22:59.054797 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d6ee23-489e-4a9c-a0d6-277b06b2616f" path="/var/lib/kubelet/pods/72d6ee23-489e-4a9c-a0d6-277b06b2616f/volumes" Jan 26 08:22:59 crc kubenswrapper[4806]: I0126 08:22:59.056214 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f48c46b-0896-4b60-8c97-f9b6608a368f" path="/var/lib/kubelet/pods/8f48c46b-0896-4b60-8c97-f9b6608a368f/volumes" Jan 26 08:22:59 crc kubenswrapper[4806]: I0126 08:22:59.057370 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bd3fae76-85a4-45ab-87b8-4ccd3303cd0e" path="/var/lib/kubelet/pods/bd3fae76-85a4-45ab-87b8-4ccd3303cd0e/volumes" Jan 26 08:23:04 crc kubenswrapper[4806]: I0126 08:23:04.042225 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:23:04 crc kubenswrapper[4806]: E0126 08:23:04.042754 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:23:11 crc kubenswrapper[4806]: I0126 08:23:11.459582 4806 generic.go:334] "Generic (PLEG): container finished" podID="290f172f-1b02-41e0-a865-c926792e9121" containerID="215467bdb782e4070bd758d02b723f745b6f31b9c4c8119efc114c2bdcdb9234" exitCode=0 Jan 26 08:23:11 crc kubenswrapper[4806]: I0126 08:23:11.459683 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" event={"ID":"290f172f-1b02-41e0-a865-c926792e9121","Type":"ContainerDied","Data":"215467bdb782e4070bd758d02b723f745b6f31b9c4c8119efc114c2bdcdb9234"} Jan 26 08:23:12 crc kubenswrapper[4806]: I0126 08:23:12.934224 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.060063 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290f172f-1b02-41e0-a865-c926792e9121-ssh-key-openstack-edpm-ipam\") pod \"290f172f-1b02-41e0-a865-c926792e9121\" (UID: \"290f172f-1b02-41e0-a865-c926792e9121\") " Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.060130 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290f172f-1b02-41e0-a865-c926792e9121-inventory\") pod \"290f172f-1b02-41e0-a865-c926792e9121\" (UID: \"290f172f-1b02-41e0-a865-c926792e9121\") " Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.060257 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znvxd\" (UniqueName: \"kubernetes.io/projected/290f172f-1b02-41e0-a865-c926792e9121-kube-api-access-znvxd\") pod \"290f172f-1b02-41e0-a865-c926792e9121\" (UID: \"290f172f-1b02-41e0-a865-c926792e9121\") " Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.091979 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/290f172f-1b02-41e0-a865-c926792e9121-kube-api-access-znvxd" (OuterVolumeSpecName: "kube-api-access-znvxd") pod "290f172f-1b02-41e0-a865-c926792e9121" (UID: "290f172f-1b02-41e0-a865-c926792e9121"). InnerVolumeSpecName "kube-api-access-znvxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.106066 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290f172f-1b02-41e0-a865-c926792e9121-inventory" (OuterVolumeSpecName: "inventory") pod "290f172f-1b02-41e0-a865-c926792e9121" (UID: "290f172f-1b02-41e0-a865-c926792e9121"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.142427 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/290f172f-1b02-41e0-a865-c926792e9121-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "290f172f-1b02-41e0-a865-c926792e9121" (UID: "290f172f-1b02-41e0-a865-c926792e9121"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.172403 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/290f172f-1b02-41e0-a865-c926792e9121-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.172438 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/290f172f-1b02-41e0-a865-c926792e9121-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.172448 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znvxd\" (UniqueName: \"kubernetes.io/projected/290f172f-1b02-41e0-a865-c926792e9121-kube-api-access-znvxd\") on node \"crc\" DevicePath \"\"" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.478700 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" event={"ID":"290f172f-1b02-41e0-a865-c926792e9121","Type":"ContainerDied","Data":"2296eb72b7dee013c62b493f3dd069e3b8663629d79ca57227b9d6cc6f9739b8"} Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.478746 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2296eb72b7dee013c62b493f3dd069e3b8663629d79ca57227b9d6cc6f9739b8" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.478822 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vb25w" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.582229 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht"] Jan 26 08:23:13 crc kubenswrapper[4806]: E0126 08:23:13.582656 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="290f172f-1b02-41e0-a865-c926792e9121" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.582671 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="290f172f-1b02-41e0-a865-c926792e9121" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.582849 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="290f172f-1b02-41e0-a865-c926792e9121" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.583531 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.585627 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.585871 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.586217 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.589416 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.603461 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht"] Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.783104 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47febe81-62f5-4336-a165-bbc520756fc7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht\" (UID: \"47febe81-62f5-4336-a165-bbc520756fc7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.783254 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvpc6\" (UniqueName: \"kubernetes.io/projected/47febe81-62f5-4336-a165-bbc520756fc7-kube-api-access-dvpc6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht\" (UID: \"47febe81-62f5-4336-a165-bbc520756fc7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.783286 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47febe81-62f5-4336-a165-bbc520756fc7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht\" (UID: \"47febe81-62f5-4336-a165-bbc520756fc7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.885560 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvpc6\" (UniqueName: \"kubernetes.io/projected/47febe81-62f5-4336-a165-bbc520756fc7-kube-api-access-dvpc6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht\" (UID: \"47febe81-62f5-4336-a165-bbc520756fc7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.885605 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47febe81-62f5-4336-a165-bbc520756fc7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht\" (UID: \"47febe81-62f5-4336-a165-bbc520756fc7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.885703 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/47febe81-62f5-4336-a165-bbc520756fc7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht\" (UID: \"47febe81-62f5-4336-a165-bbc520756fc7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.889655 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47febe81-62f5-4336-a165-bbc520756fc7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht\" (UID: \"47febe81-62f5-4336-a165-bbc520756fc7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.889743 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47febe81-62f5-4336-a165-bbc520756fc7-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht\" (UID: \"47febe81-62f5-4336-a165-bbc520756fc7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:13 crc kubenswrapper[4806]: I0126 08:23:13.909002 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvpc6\" (UniqueName: \"kubernetes.io/projected/47febe81-62f5-4336-a165-bbc520756fc7-kube-api-access-dvpc6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht\" (UID: \"47febe81-62f5-4336-a165-bbc520756fc7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:14 crc kubenswrapper[4806]: I0126 08:23:14.206338 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:14 crc kubenswrapper[4806]: I0126 08:23:14.768376 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht"] Jan 26 08:23:15 crc kubenswrapper[4806]: I0126 08:23:15.503534 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" event={"ID":"47febe81-62f5-4336-a165-bbc520756fc7","Type":"ContainerStarted","Data":"56058a7c9f3e5411f0ddf6acc427d849613d03511d3b4a6ea4a7003769284b7f"} Jan 26 08:23:15 crc kubenswrapper[4806]: I0126 08:23:15.503886 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" event={"ID":"47febe81-62f5-4336-a165-bbc520756fc7","Type":"ContainerStarted","Data":"8fca49077108ce8b8ffdb14aa088f311b255c8b2f61b640694e068f204837fc2"} Jan 26 08:23:15 crc kubenswrapper[4806]: I0126 08:23:15.525271 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" podStartSLOduration=2.089644005 podStartE2EDuration="2.52524651s" podCreationTimestamp="2026-01-26 08:23:13 +0000 UTC" firstStartedPulling="2026-01-26 08:23:14.778879197 +0000 UTC m=+1774.043287253" lastFinishedPulling="2026-01-26 08:23:15.214481692 +0000 UTC m=+1774.478889758" observedRunningTime="2026-01-26 08:23:15.519084687 +0000 UTC m=+1774.783492783" watchObservedRunningTime="2026-01-26 08:23:15.52524651 +0000 UTC m=+1774.789654606" Jan 26 08:23:17 crc kubenswrapper[4806]: I0126 08:23:17.043110 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 
08:23:17 crc kubenswrapper[4806]: E0126 08:23:17.043777 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:23:20 crc kubenswrapper[4806]: E0126 08:23:20.718336 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47febe81_62f5_4336_a165_bbc520756fc7.slice/crio-56058a7c9f3e5411f0ddf6acc427d849613d03511d3b4a6ea4a7003769284b7f.scope\": RecentStats: unable to find data in memory cache]" Jan 26 08:23:21 crc kubenswrapper[4806]: I0126 08:23:21.552099 4806 generic.go:334] "Generic (PLEG): container finished" podID="47febe81-62f5-4336-a165-bbc520756fc7" containerID="56058a7c9f3e5411f0ddf6acc427d849613d03511d3b4a6ea4a7003769284b7f" exitCode=0 Jan 26 08:23:21 crc kubenswrapper[4806]: I0126 08:23:21.552167 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" event={"ID":"47febe81-62f5-4336-a165-bbc520756fc7","Type":"ContainerDied","Data":"56058a7c9f3e5411f0ddf6acc427d849613d03511d3b4a6ea4a7003769284b7f"} Jan 26 08:23:22 crc kubenswrapper[4806]: I0126 08:23:22.960284 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.064326 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvpc6\" (UniqueName: \"kubernetes.io/projected/47febe81-62f5-4336-a165-bbc520756fc7-kube-api-access-dvpc6\") pod \"47febe81-62f5-4336-a165-bbc520756fc7\" (UID: \"47febe81-62f5-4336-a165-bbc520756fc7\") " Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.064947 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47febe81-62f5-4336-a165-bbc520756fc7-inventory\") pod \"47febe81-62f5-4336-a165-bbc520756fc7\" (UID: \"47febe81-62f5-4336-a165-bbc520756fc7\") " Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.065013 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47febe81-62f5-4336-a165-bbc520756fc7-ssh-key-openstack-edpm-ipam\") pod \"47febe81-62f5-4336-a165-bbc520756fc7\" (UID: \"47febe81-62f5-4336-a165-bbc520756fc7\") " Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.071621 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47febe81-62f5-4336-a165-bbc520756fc7-kube-api-access-dvpc6" (OuterVolumeSpecName: "kube-api-access-dvpc6") pod "47febe81-62f5-4336-a165-bbc520756fc7" (UID: "47febe81-62f5-4336-a165-bbc520756fc7"). InnerVolumeSpecName "kube-api-access-dvpc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.100152 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47febe81-62f5-4336-a165-bbc520756fc7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "47febe81-62f5-4336-a165-bbc520756fc7" (UID: "47febe81-62f5-4336-a165-bbc520756fc7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.108926 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47febe81-62f5-4336-a165-bbc520756fc7-inventory" (OuterVolumeSpecName: "inventory") pod "47febe81-62f5-4336-a165-bbc520756fc7" (UID: "47febe81-62f5-4336-a165-bbc520756fc7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.167598 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/47febe81-62f5-4336-a165-bbc520756fc7-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.168000 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/47febe81-62f5-4336-a165-bbc520756fc7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.168018 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvpc6\" (UniqueName: \"kubernetes.io/projected/47febe81-62f5-4336-a165-bbc520756fc7-kube-api-access-dvpc6\") on node \"crc\" DevicePath \"\"" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.569572 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" event={"ID":"47febe81-62f5-4336-a165-bbc520756fc7","Type":"ContainerDied","Data":"8fca49077108ce8b8ffdb14aa088f311b255c8b2f61b640694e068f204837fc2"} Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.569630 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fca49077108ce8b8ffdb14aa088f311b255c8b2f61b640694e068f204837fc2" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.570033 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.682051 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf"] Jan 26 08:23:23 crc kubenswrapper[4806]: E0126 08:23:23.683781 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47febe81-62f5-4336-a165-bbc520756fc7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.683901 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="47febe81-62f5-4336-a165-bbc520756fc7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.684209 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="47febe81-62f5-4336-a165-bbc520756fc7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.685043 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.689105 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.689371 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.689738 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.690178 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.690228 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf"] Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.879726 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8s7lq\" (UniqueName: \"kubernetes.io/projected/b3d151b0-221e-46fe-a24a-cb842d74c532-kube-api-access-8s7lq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jzfsf\" (UID: \"b3d151b0-221e-46fe-a24a-cb842d74c532\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.879796 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3d151b0-221e-46fe-a24a-cb842d74c532-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jzfsf\" (UID: \"b3d151b0-221e-46fe-a24a-cb842d74c532\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.879920 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3d151b0-221e-46fe-a24a-cb842d74c532-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jzfsf\" (UID: \"b3d151b0-221e-46fe-a24a-cb842d74c532\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 
08:23:23.981850 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3d151b0-221e-46fe-a24a-cb842d74c532-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jzfsf\" (UID: \"b3d151b0-221e-46fe-a24a-cb842d74c532\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.981978 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3d151b0-221e-46fe-a24a-cb842d74c532-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jzfsf\" (UID: \"b3d151b0-221e-46fe-a24a-cb842d74c532\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.982101 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8s7lq\" (UniqueName: \"kubernetes.io/projected/b3d151b0-221e-46fe-a24a-cb842d74c532-kube-api-access-8s7lq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jzfsf\" (UID: \"b3d151b0-221e-46fe-a24a-cb842d74c532\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.986790 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3d151b0-221e-46fe-a24a-cb842d74c532-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jzfsf\" (UID: \"b3d151b0-221e-46fe-a24a-cb842d74c532\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:23:23 crc kubenswrapper[4806]: I0126 08:23:23.987066 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3d151b0-221e-46fe-a24a-cb842d74c532-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jzfsf\" (UID: \"b3d151b0-221e-46fe-a24a-cb842d74c532\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:23:24 crc kubenswrapper[4806]: I0126 08:23:24.011853 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8s7lq\" (UniqueName: \"kubernetes.io/projected/b3d151b0-221e-46fe-a24a-cb842d74c532-kube-api-access-8s7lq\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-jzfsf\" (UID: \"b3d151b0-221e-46fe-a24a-cb842d74c532\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:23:24 crc kubenswrapper[4806]: I0126 08:23:24.305609 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:23:24 crc kubenswrapper[4806]: I0126 08:23:24.884969 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf"] Jan 26 08:23:25 crc kubenswrapper[4806]: I0126 08:23:25.589954 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" event={"ID":"b3d151b0-221e-46fe-a24a-cb842d74c532","Type":"ContainerStarted","Data":"cad5a072c5e00efb5f7a4e2017b40d08aa141a924821eb485bd4e135d633339a"} Jan 26 08:23:25 crc kubenswrapper[4806]: I0126 08:23:25.590011 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" event={"ID":"b3d151b0-221e-46fe-a24a-cb842d74c532","Type":"ContainerStarted","Data":"c25074e8f2bca419852af17f535f952282c262601ba38e3c2ea9ee3b39ad1483"} Jan 26 08:23:25 crc kubenswrapper[4806]: I0126 08:23:25.611652 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" podStartSLOduration=2.208326903 podStartE2EDuration="2.611636012s" podCreationTimestamp="2026-01-26 08:23:23 +0000 UTC" firstStartedPulling="2026-01-26 08:23:24.885335151 +0000 UTC m=+1784.149743207" lastFinishedPulling="2026-01-26 08:23:25.28864426 +0000 UTC m=+1784.553052316" observedRunningTime="2026-01-26 08:23:25.608981448 +0000 UTC m=+1784.873389504" watchObservedRunningTime="2026-01-26 08:23:25.611636012 +0000 UTC m=+1784.876044068" Jan 26 08:23:26 crc kubenswrapper[4806]: I0126 08:23:26.044319 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wvpkk"] Jan 26 08:23:26 crc kubenswrapper[4806]: I0126 08:23:26.053577 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wvpkk"] Jan 26 08:23:27 crc kubenswrapper[4806]: I0126 08:23:27.051730 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e29f7cf-8720-4555-8418-e53025a6bdac" path="/var/lib/kubelet/pods/9e29f7cf-8720-4555-8418-e53025a6bdac/volumes" Jan 26 08:23:28 crc kubenswrapper[4806]: I0126 08:23:28.041725 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:23:28 crc kubenswrapper[4806]: E0126 08:23:28.042242 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:23:36 crc kubenswrapper[4806]: I0126 08:23:36.574264 4806 scope.go:117] "RemoveContainer" containerID="bc384626e62ad7f9443fb10ffc2563ce19f784c897274d712c0aab1ff5997125" Jan 26 08:23:36 crc kubenswrapper[4806]: I0126 08:23:36.618870 4806 scope.go:117] "RemoveContainer" containerID="ef7bc0b4fd6f0106085491af672fa3981bd6bac246489f5f455caab983c8d6d4" Jan 26 08:23:36 crc kubenswrapper[4806]: I0126 08:23:36.659382 4806 scope.go:117] "RemoveContainer" containerID="c756510f061ea927de4e4bc3a2c55f0d5e110989246304ce40a43967c0a820a2" Jan 26 08:23:36 crc kubenswrapper[4806]: I0126 08:23:36.694974 4806 scope.go:117] "RemoveContainer" 
containerID="7cce2122e8c14d1dedda1a85e55dec327911cbf5b8dbc8a94719235fa96e32c1" Jan 26 08:23:36 crc kubenswrapper[4806]: I0126 08:23:36.773565 4806 scope.go:117] "RemoveContainer" containerID="856e69703456bc95bc94372c06144d9d777d50e705747fdc72ee9acee8ba2ac0" Jan 26 08:23:36 crc kubenswrapper[4806]: I0126 08:23:36.797390 4806 scope.go:117] "RemoveContainer" containerID="c27d308f427d70d8482fc608a3c50ed809de85b99715a9227d9e13d191388913" Jan 26 08:23:36 crc kubenswrapper[4806]: I0126 08:23:36.839224 4806 scope.go:117] "RemoveContainer" containerID="69d771cfe1e96645f8c20328d8864ae997495298b7110716e40c992feda75e0f" Jan 26 08:23:43 crc kubenswrapper[4806]: I0126 08:23:43.042505 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:23:43 crc kubenswrapper[4806]: E0126 08:23:43.043956 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:23:49 crc kubenswrapper[4806]: I0126 08:23:49.053151 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-pwr48"] Jan 26 08:23:49 crc kubenswrapper[4806]: I0126 08:23:49.054590 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-pwr48"] Jan 26 08:23:51 crc kubenswrapper[4806]: I0126 08:23:51.054024 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d495701-d98d-4c0a-be75-2330f3589594" path="/var/lib/kubelet/pods/0d495701-d98d-4c0a-be75-2330f3589594/volumes" Jan 26 08:23:55 crc kubenswrapper[4806]: I0126 08:23:55.030210 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ltfdd"] Jan 26 08:23:55 crc kubenswrapper[4806]: I0126 08:23:55.055658 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ltfdd"] Jan 26 08:23:57 crc kubenswrapper[4806]: I0126 08:23:57.043818 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:23:57 crc kubenswrapper[4806]: E0126 08:23:57.044387 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:23:57 crc kubenswrapper[4806]: I0126 08:23:57.054850 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17a62481-034a-4042-b58d-a3ebf9e99202" path="/var/lib/kubelet/pods/17a62481-034a-4042-b58d-a3ebf9e99202/volumes" Jan 26 08:24:10 crc kubenswrapper[4806]: I0126 08:24:10.042632 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:24:10 crc kubenswrapper[4806]: E0126 08:24:10.043160 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:24:10 crc kubenswrapper[4806]: I0126 08:24:10.992887 4806 generic.go:334] "Generic (PLEG): container finished" podID="b3d151b0-221e-46fe-a24a-cb842d74c532" containerID="cad5a072c5e00efb5f7a4e2017b40d08aa141a924821eb485bd4e135d633339a" exitCode=0 Jan 26 08:24:10 crc kubenswrapper[4806]: I0126 08:24:10.992981 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" event={"ID":"b3d151b0-221e-46fe-a24a-cb842d74c532","Type":"ContainerDied","Data":"cad5a072c5e00efb5f7a4e2017b40d08aa141a924821eb485bd4e135d633339a"} Jan 26 08:24:12 crc kubenswrapper[4806]: I0126 08:24:12.382160 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:24:12 crc kubenswrapper[4806]: I0126 08:24:12.512790 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8s7lq\" (UniqueName: \"kubernetes.io/projected/b3d151b0-221e-46fe-a24a-cb842d74c532-kube-api-access-8s7lq\") pod \"b3d151b0-221e-46fe-a24a-cb842d74c532\" (UID: \"b3d151b0-221e-46fe-a24a-cb842d74c532\") " Jan 26 08:24:12 crc kubenswrapper[4806]: I0126 08:24:12.513242 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3d151b0-221e-46fe-a24a-cb842d74c532-ssh-key-openstack-edpm-ipam\") pod \"b3d151b0-221e-46fe-a24a-cb842d74c532\" (UID: \"b3d151b0-221e-46fe-a24a-cb842d74c532\") " Jan 26 08:24:12 crc kubenswrapper[4806]: I0126 08:24:12.513375 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3d151b0-221e-46fe-a24a-cb842d74c532-inventory\") pod \"b3d151b0-221e-46fe-a24a-cb842d74c532\" (UID: \"b3d151b0-221e-46fe-a24a-cb842d74c532\") " Jan 26 08:24:12 crc kubenswrapper[4806]: I0126 08:24:12.523198 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d151b0-221e-46fe-a24a-cb842d74c532-kube-api-access-8s7lq" (OuterVolumeSpecName: "kube-api-access-8s7lq") pod "b3d151b0-221e-46fe-a24a-cb842d74c532" (UID: "b3d151b0-221e-46fe-a24a-cb842d74c532"). InnerVolumeSpecName "kube-api-access-8s7lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:24:12 crc kubenswrapper[4806]: I0126 08:24:12.544849 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d151b0-221e-46fe-a24a-cb842d74c532-inventory" (OuterVolumeSpecName: "inventory") pod "b3d151b0-221e-46fe-a24a-cb842d74c532" (UID: "b3d151b0-221e-46fe-a24a-cb842d74c532"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:24:12 crc kubenswrapper[4806]: I0126 08:24:12.546239 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3d151b0-221e-46fe-a24a-cb842d74c532-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b3d151b0-221e-46fe-a24a-cb842d74c532" (UID: "b3d151b0-221e-46fe-a24a-cb842d74c532"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:24:12 crc kubenswrapper[4806]: I0126 08:24:12.615272 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3d151b0-221e-46fe-a24a-cb842d74c532-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:24:12 crc kubenswrapper[4806]: I0126 08:24:12.615301 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8s7lq\" (UniqueName: \"kubernetes.io/projected/b3d151b0-221e-46fe-a24a-cb842d74c532-kube-api-access-8s7lq\") on node \"crc\" DevicePath \"\"" Jan 26 08:24:12 crc kubenswrapper[4806]: I0126 08:24:12.615983 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3d151b0-221e-46fe-a24a-cb842d74c532-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.017348 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" event={"ID":"b3d151b0-221e-46fe-a24a-cb842d74c532","Type":"ContainerDied","Data":"c25074e8f2bca419852af17f535f952282c262601ba38e3c2ea9ee3b39ad1483"} Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.017397 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c25074e8f2bca419852af17f535f952282c262601ba38e3c2ea9ee3b39ad1483" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.017894 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-jzfsf" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.157986 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk"] Jan 26 08:24:13 crc kubenswrapper[4806]: E0126 08:24:13.158386 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d151b0-221e-46fe-a24a-cb842d74c532" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.158404 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d151b0-221e-46fe-a24a-cb842d74c532" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.158686 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d151b0-221e-46fe-a24a-cb842d74c532" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.159309 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.161461 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.161722 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.164297 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.164458 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.167066 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk"] Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.334671 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-htjzk\" (UID: \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.335015 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g2r8\" (UniqueName: \"kubernetes.io/projected/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-kube-api-access-4g2r8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-htjzk\" (UID: \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.335099 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-htjzk\" (UID: \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.437264 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-htjzk\" (UID: \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.437479 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g2r8\" (UniqueName: \"kubernetes.io/projected/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-kube-api-access-4g2r8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-htjzk\" (UID: \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.437563 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-htjzk\" (UID: \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.442904 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-htjzk\" (UID: \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.442956 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-htjzk\" (UID: \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.457376 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g2r8\" (UniqueName: \"kubernetes.io/projected/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-kube-api-access-4g2r8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-htjzk\" (UID: \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:24:13 crc kubenswrapper[4806]: I0126 08:24:13.477362 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:24:14 crc kubenswrapper[4806]: I0126 08:24:14.031559 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk"] Jan 26 08:24:14 crc kubenswrapper[4806]: I0126 08:24:14.035050 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 08:24:15 crc kubenswrapper[4806]: I0126 08:24:15.039294 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" event={"ID":"ffea3d90-3e30-4e6e-9c01-ee7411638bc1","Type":"ContainerStarted","Data":"184cbd7f52d723ae4c0512f5f4c4cf88bdcc6faded29620067428f7124422988"} Jan 26 08:24:15 crc kubenswrapper[4806]: I0126 08:24:15.062323 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" event={"ID":"ffea3d90-3e30-4e6e-9c01-ee7411638bc1","Type":"ContainerStarted","Data":"dacbf614e3f2c3973c51e545f62c93b8fa6d08a9a72eb133f011c818f5a50fac"} Jan 26 08:24:25 crc kubenswrapper[4806]: I0126 08:24:25.042226 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:24:25 crc kubenswrapper[4806]: E0126 08:24:25.043048 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:24:31 crc 
kubenswrapper[4806]: I0126 08:24:31.066171 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" podStartSLOduration=17.504987604 podStartE2EDuration="18.066154257s" podCreationTimestamp="2026-01-26 08:24:13 +0000 UTC" firstStartedPulling="2026-01-26 08:24:14.034516246 +0000 UTC m=+1833.298924342" lastFinishedPulling="2026-01-26 08:24:14.595682939 +0000 UTC m=+1833.860090995" observedRunningTime="2026-01-26 08:24:15.064757235 +0000 UTC m=+1834.329165301" watchObservedRunningTime="2026-01-26 08:24:31.066154257 +0000 UTC m=+1850.330562313" Jan 26 08:24:31 crc kubenswrapper[4806]: I0126 08:24:31.073472 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-j5f4m"] Jan 26 08:24:31 crc kubenswrapper[4806]: I0126 08:24:31.080536 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-j5f4m"] Jan 26 08:24:33 crc kubenswrapper[4806]: I0126 08:24:33.067273 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ad06e81-5ace-4cc0-9c53-aee0ec57425b" path="/var/lib/kubelet/pods/6ad06e81-5ace-4cc0-9c53-aee0ec57425b/volumes" Jan 26 08:24:36 crc kubenswrapper[4806]: I0126 08:24:36.974666 4806 scope.go:117] "RemoveContainer" containerID="89e896068718c5e18ba6426ea7fd689d74dd449ffa6ca67a6ea77d410009d80e" Jan 26 08:24:37 crc kubenswrapper[4806]: I0126 08:24:37.011481 4806 scope.go:117] "RemoveContainer" containerID="8e58449f0541bdbd765bf2724cf689af99f9580047a40d7e34b5976769a0b19a" Jan 26 08:24:37 crc kubenswrapper[4806]: I0126 08:24:37.047023 4806 scope.go:117] "RemoveContainer" containerID="bdc730cb52f083e34a1b3265bcf8dcfa6ebd679236ed4fe2dae841cec138882b" Jan 26 08:24:39 crc kubenswrapper[4806]: I0126 08:24:39.042458 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:24:39 crc kubenswrapper[4806]: E0126 08:24:39.042896 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:24:52 crc kubenswrapper[4806]: I0126 08:24:52.042086 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:24:52 crc kubenswrapper[4806]: E0126 08:24:52.042963 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:25:05 crc kubenswrapper[4806]: I0126 08:25:05.041724 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:25:05 crc kubenswrapper[4806]: E0126 08:25:05.042478 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:25:15 crc kubenswrapper[4806]: I0126 08:25:15.570772 4806 generic.go:334] "Generic (PLEG): container finished" podID="ffea3d90-3e30-4e6e-9c01-ee7411638bc1" containerID="184cbd7f52d723ae4c0512f5f4c4cf88bdcc6faded29620067428f7124422988" exitCode=0 Jan 26 08:25:15 crc kubenswrapper[4806]: I0126 08:25:15.570850 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" event={"ID":"ffea3d90-3e30-4e6e-9c01-ee7411638bc1","Type":"ContainerDied","Data":"184cbd7f52d723ae4c0512f5f4c4cf88bdcc6faded29620067428f7124422988"} Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.045843 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:25:17 crc kubenswrapper[4806]: E0126 08:25:17.046417 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.054256 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.115655 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g2r8\" (UniqueName: \"kubernetes.io/projected/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-kube-api-access-4g2r8\") pod \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\" (UID: \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\") " Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.115752 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-inventory\") pod \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\" (UID: \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\") " Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.115837 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-ssh-key-openstack-edpm-ipam\") pod \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\" (UID: \"ffea3d90-3e30-4e6e-9c01-ee7411638bc1\") " Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.135758 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-kube-api-access-4g2r8" (OuterVolumeSpecName: "kube-api-access-4g2r8") pod "ffea3d90-3e30-4e6e-9c01-ee7411638bc1" (UID: "ffea3d90-3e30-4e6e-9c01-ee7411638bc1"). InnerVolumeSpecName "kube-api-access-4g2r8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.151647 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ffea3d90-3e30-4e6e-9c01-ee7411638bc1" (UID: "ffea3d90-3e30-4e6e-9c01-ee7411638bc1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.153973 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-inventory" (OuterVolumeSpecName: "inventory") pod "ffea3d90-3e30-4e6e-9c01-ee7411638bc1" (UID: "ffea3d90-3e30-4e6e-9c01-ee7411638bc1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.218748 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g2r8\" (UniqueName: \"kubernetes.io/projected/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-kube-api-access-4g2r8\") on node \"crc\" DevicePath \"\"" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.218954 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.219079 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ffea3d90-3e30-4e6e-9c01-ee7411638bc1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.590160 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" event={"ID":"ffea3d90-3e30-4e6e-9c01-ee7411638bc1","Type":"ContainerDied","Data":"dacbf614e3f2c3973c51e545f62c93b8fa6d08a9a72eb133f011c818f5a50fac"} Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.590377 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dacbf614e3f2c3973c51e545f62c93b8fa6d08a9a72eb133f011c818f5a50fac" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.590378 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-htjzk" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.715563 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wdztn"] Jan 26 08:25:17 crc kubenswrapper[4806]: E0126 08:25:17.716226 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffea3d90-3e30-4e6e-9c01-ee7411638bc1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.716336 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffea3d90-3e30-4e6e-9c01-ee7411638bc1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.716634 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffea3d90-3e30-4e6e-9c01-ee7411638bc1" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.717548 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.758761 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.759048 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.759685 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.759825 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.767131 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wdztn"] Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.862968 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wdztn\" (UID: \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.863047 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wdztn\" (UID: \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.863317 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4257\" (UniqueName: \"kubernetes.io/projected/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-kube-api-access-z4257\") pod \"ssh-known-hosts-edpm-deployment-wdztn\" (UID: \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.965977 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4257\" (UniqueName: \"kubernetes.io/projected/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-kube-api-access-z4257\") pod \"ssh-known-hosts-edpm-deployment-wdztn\" (UID: \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.966198 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wdztn\" (UID: \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.966282 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wdztn\" (UID: \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:17 crc 
kubenswrapper[4806]: I0126 08:25:17.972624 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-wdztn\" (UID: \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.974438 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-wdztn\" (UID: \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:17 crc kubenswrapper[4806]: I0126 08:25:17.993346 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4257\" (UniqueName: \"kubernetes.io/projected/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-kube-api-access-z4257\") pod \"ssh-known-hosts-edpm-deployment-wdztn\" (UID: \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\") " pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:18 crc kubenswrapper[4806]: I0126 08:25:18.084022 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:18 crc kubenswrapper[4806]: I0126 08:25:18.603341 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-wdztn"] Jan 26 08:25:19 crc kubenswrapper[4806]: I0126 08:25:19.614144 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" event={"ID":"382a8851-c811-4842-a4a9-e0a2e6d7f2e6","Type":"ContainerStarted","Data":"622546b37f5303a3d7efe637e8790bba1e0962619d65cf9668f3661651f96577"} Jan 26 08:25:19 crc kubenswrapper[4806]: I0126 08:25:19.614848 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" event={"ID":"382a8851-c811-4842-a4a9-e0a2e6d7f2e6","Type":"ContainerStarted","Data":"42ad627323a67e0888cb97dd821fbf52a8055663ea8f3610a39a0b8666dd66a5"} Jan 26 08:25:19 crc kubenswrapper[4806]: I0126 08:25:19.639879 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" podStartSLOduration=2.144631646 podStartE2EDuration="2.639860029s" podCreationTimestamp="2026-01-26 08:25:17 +0000 UTC" firstStartedPulling="2026-01-26 08:25:18.613216001 +0000 UTC m=+1897.877624057" lastFinishedPulling="2026-01-26 08:25:19.108444354 +0000 UTC m=+1898.372852440" observedRunningTime="2026-01-26 08:25:19.638645605 +0000 UTC m=+1898.903053681" watchObservedRunningTime="2026-01-26 08:25:19.639860029 +0000 UTC m=+1898.904268085" Jan 26 08:25:27 crc kubenswrapper[4806]: I0126 08:25:27.681495 4806 generic.go:334] "Generic (PLEG): container finished" podID="382a8851-c811-4842-a4a9-e0a2e6d7f2e6" containerID="622546b37f5303a3d7efe637e8790bba1e0962619d65cf9668f3661651f96577" exitCode=0 Jan 26 08:25:27 crc kubenswrapper[4806]: I0126 08:25:27.681674 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" event={"ID":"382a8851-c811-4842-a4a9-e0a2e6d7f2e6","Type":"ContainerDied","Data":"622546b37f5303a3d7efe637e8790bba1e0962619d65cf9668f3661651f96577"} Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.156947 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.207992 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-inventory-0\") pod \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\" (UID: \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\") " Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.208080 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4257\" (UniqueName: \"kubernetes.io/projected/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-kube-api-access-z4257\") pod \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\" (UID: \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\") " Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.208130 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-ssh-key-openstack-edpm-ipam\") pod \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\" (UID: \"382a8851-c811-4842-a4a9-e0a2e6d7f2e6\") " Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.216090 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-kube-api-access-z4257" (OuterVolumeSpecName: "kube-api-access-z4257") pod "382a8851-c811-4842-a4a9-e0a2e6d7f2e6" (UID: "382a8851-c811-4842-a4a9-e0a2e6d7f2e6"). InnerVolumeSpecName "kube-api-access-z4257". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.237968 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "382a8851-c811-4842-a4a9-e0a2e6d7f2e6" (UID: "382a8851-c811-4842-a4a9-e0a2e6d7f2e6"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.249687 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "382a8851-c811-4842-a4a9-e0a2e6d7f2e6" (UID: "382a8851-c811-4842-a4a9-e0a2e6d7f2e6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.311773 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.311819 4806 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.311834 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4257\" (UniqueName: \"kubernetes.io/projected/382a8851-c811-4842-a4a9-e0a2e6d7f2e6-kube-api-access-z4257\") on node \"crc\" DevicePath \"\"" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.702492 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" event={"ID":"382a8851-c811-4842-a4a9-e0a2e6d7f2e6","Type":"ContainerDied","Data":"42ad627323a67e0888cb97dd821fbf52a8055663ea8f3610a39a0b8666dd66a5"} Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.702563 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-wdztn" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.702579 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42ad627323a67e0888cb97dd821fbf52a8055663ea8f3610a39a0b8666dd66a5" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.799876 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc"] Jan 26 08:25:29 crc kubenswrapper[4806]: E0126 08:25:29.800378 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382a8851-c811-4842-a4a9-e0a2e6d7f2e6" containerName="ssh-known-hosts-edpm-deployment" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.800401 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="382a8851-c811-4842-a4a9-e0a2e6d7f2e6" containerName="ssh-known-hosts-edpm-deployment" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.800706 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="382a8851-c811-4842-a4a9-e0a2e6d7f2e6" containerName="ssh-known-hosts-edpm-deployment" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.801549 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.806953 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.807039 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.807161 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.807459 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.817826 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc"] Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.938732 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmpvc\" (UniqueName: \"kubernetes.io/projected/ee3e603a-b29b-4774-87b1-83e26920dfde-kube-api-access-nmpvc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pc9sc\" (UID: \"ee3e603a-b29b-4774-87b1-83e26920dfde\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.938782 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee3e603a-b29b-4774-87b1-83e26920dfde-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pc9sc\" (UID: \"ee3e603a-b29b-4774-87b1-83e26920dfde\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:29 crc kubenswrapper[4806]: I0126 08:25:29.938990 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee3e603a-b29b-4774-87b1-83e26920dfde-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pc9sc\" (UID: \"ee3e603a-b29b-4774-87b1-83e26920dfde\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:30 crc kubenswrapper[4806]: I0126 08:25:30.040512 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmpvc\" (UniqueName: \"kubernetes.io/projected/ee3e603a-b29b-4774-87b1-83e26920dfde-kube-api-access-nmpvc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pc9sc\" (UID: \"ee3e603a-b29b-4774-87b1-83e26920dfde\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:30 crc kubenswrapper[4806]: I0126 08:25:30.040569 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee3e603a-b29b-4774-87b1-83e26920dfde-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pc9sc\" (UID: \"ee3e603a-b29b-4774-87b1-83e26920dfde\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:30 crc kubenswrapper[4806]: I0126 08:25:30.040663 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee3e603a-b29b-4774-87b1-83e26920dfde-inventory\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-pc9sc\" (UID: \"ee3e603a-b29b-4774-87b1-83e26920dfde\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:30 crc kubenswrapper[4806]: I0126 08:25:30.048177 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee3e603a-b29b-4774-87b1-83e26920dfde-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pc9sc\" (UID: \"ee3e603a-b29b-4774-87b1-83e26920dfde\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:30 crc kubenswrapper[4806]: I0126 08:25:30.048790 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee3e603a-b29b-4774-87b1-83e26920dfde-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pc9sc\" (UID: \"ee3e603a-b29b-4774-87b1-83e26920dfde\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:30 crc kubenswrapper[4806]: I0126 08:25:30.058494 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmpvc\" (UniqueName: \"kubernetes.io/projected/ee3e603a-b29b-4774-87b1-83e26920dfde-kube-api-access-nmpvc\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pc9sc\" (UID: \"ee3e603a-b29b-4774-87b1-83e26920dfde\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:30 crc kubenswrapper[4806]: I0126 08:25:30.116949 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:30 crc kubenswrapper[4806]: I0126 08:25:30.720062 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc"] Jan 26 08:25:31 crc kubenswrapper[4806]: I0126 08:25:31.720349 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" event={"ID":"ee3e603a-b29b-4774-87b1-83e26920dfde","Type":"ContainerStarted","Data":"66cd71c900218465280d3ee1c3b49c25825c7f646bfa4d054b0cd29300e50fce"} Jan 26 08:25:31 crc kubenswrapper[4806]: I0126 08:25:31.720686 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" event={"ID":"ee3e603a-b29b-4774-87b1-83e26920dfde","Type":"ContainerStarted","Data":"4ded4f2c223033cbd5a1d7cb21bd075a2715dbd3726dc607408e792356574567"} Jan 26 08:25:31 crc kubenswrapper[4806]: I0126 08:25:31.741422 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" podStartSLOduration=2.332483054 podStartE2EDuration="2.741405176s" podCreationTimestamp="2026-01-26 08:25:29 +0000 UTC" firstStartedPulling="2026-01-26 08:25:30.730463336 +0000 UTC m=+1909.994871392" lastFinishedPulling="2026-01-26 08:25:31.139385458 +0000 UTC m=+1910.403793514" observedRunningTime="2026-01-26 08:25:31.732626745 +0000 UTC m=+1910.997034811" watchObservedRunningTime="2026-01-26 08:25:31.741405176 +0000 UTC m=+1911.005813232" Jan 26 08:25:32 crc kubenswrapper[4806]: I0126 08:25:32.043217 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:25:32 crc kubenswrapper[4806]: E0126 08:25:32.043883 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:25:40 crc kubenswrapper[4806]: I0126 08:25:40.788854 4806 generic.go:334] "Generic (PLEG): container finished" podID="ee3e603a-b29b-4774-87b1-83e26920dfde" containerID="66cd71c900218465280d3ee1c3b49c25825c7f646bfa4d054b0cd29300e50fce" exitCode=0 Jan 26 08:25:40 crc kubenswrapper[4806]: I0126 08:25:40.788941 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" event={"ID":"ee3e603a-b29b-4774-87b1-83e26920dfde","Type":"ContainerDied","Data":"66cd71c900218465280d3ee1c3b49c25825c7f646bfa4d054b0cd29300e50fce"} Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.194755 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.318414 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee3e603a-b29b-4774-87b1-83e26920dfde-inventory\") pod \"ee3e603a-b29b-4774-87b1-83e26920dfde\" (UID: \"ee3e603a-b29b-4774-87b1-83e26920dfde\") " Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.318768 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmpvc\" (UniqueName: \"kubernetes.io/projected/ee3e603a-b29b-4774-87b1-83e26920dfde-kube-api-access-nmpvc\") pod \"ee3e603a-b29b-4774-87b1-83e26920dfde\" (UID: \"ee3e603a-b29b-4774-87b1-83e26920dfde\") " Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.318951 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee3e603a-b29b-4774-87b1-83e26920dfde-ssh-key-openstack-edpm-ipam\") pod \"ee3e603a-b29b-4774-87b1-83e26920dfde\" (UID: \"ee3e603a-b29b-4774-87b1-83e26920dfde\") " Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.329012 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3e603a-b29b-4774-87b1-83e26920dfde-kube-api-access-nmpvc" (OuterVolumeSpecName: "kube-api-access-nmpvc") pod "ee3e603a-b29b-4774-87b1-83e26920dfde" (UID: "ee3e603a-b29b-4774-87b1-83e26920dfde"). InnerVolumeSpecName "kube-api-access-nmpvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.345771 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3e603a-b29b-4774-87b1-83e26920dfde-inventory" (OuterVolumeSpecName: "inventory") pod "ee3e603a-b29b-4774-87b1-83e26920dfde" (UID: "ee3e603a-b29b-4774-87b1-83e26920dfde"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.350512 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3e603a-b29b-4774-87b1-83e26920dfde-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ee3e603a-b29b-4774-87b1-83e26920dfde" (UID: "ee3e603a-b29b-4774-87b1-83e26920dfde"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.422329 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmpvc\" (UniqueName: \"kubernetes.io/projected/ee3e603a-b29b-4774-87b1-83e26920dfde-kube-api-access-nmpvc\") on node \"crc\" DevicePath \"\"" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.422972 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ee3e603a-b29b-4774-87b1-83e26920dfde-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.422994 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ee3e603a-b29b-4774-87b1-83e26920dfde-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.811071 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" event={"ID":"ee3e603a-b29b-4774-87b1-83e26920dfde","Type":"ContainerDied","Data":"4ded4f2c223033cbd5a1d7cb21bd075a2715dbd3726dc607408e792356574567"} Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.811107 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ded4f2c223033cbd5a1d7cb21bd075a2715dbd3726dc607408e792356574567" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.811160 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pc9sc" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.916847 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz"] Jan 26 08:25:42 crc kubenswrapper[4806]: E0126 08:25:42.917331 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3e603a-b29b-4774-87b1-83e26920dfde" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.917352 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3e603a-b29b-4774-87b1-83e26920dfde" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.917642 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3e603a-b29b-4774-87b1-83e26920dfde" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.918434 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.921910 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.922199 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.922832 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.922976 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:25:42 crc kubenswrapper[4806]: I0126 08:25:42.925458 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz"] Jan 26 08:25:43 crc kubenswrapper[4806]: I0126 08:25:43.066581 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz\" (UID: \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:43 crc kubenswrapper[4806]: I0126 08:25:43.066664 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz\" (UID: \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:43 crc kubenswrapper[4806]: I0126 08:25:43.066823 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fj8s\" (UniqueName: \"kubernetes.io/projected/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-kube-api-access-8fj8s\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz\" (UID: \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:43 crc kubenswrapper[4806]: I0126 08:25:43.169562 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fj8s\" (UniqueName: \"kubernetes.io/projected/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-kube-api-access-8fj8s\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz\" (UID: \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:43 crc kubenswrapper[4806]: I0126 08:25:43.170143 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz\" (UID: \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:43 crc kubenswrapper[4806]: I0126 08:25:43.170294 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz\" (UID: \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:43 crc kubenswrapper[4806]: I0126 08:25:43.174333 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz\" (UID: \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:43 crc kubenswrapper[4806]: I0126 08:25:43.179965 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz\" (UID: \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:43 crc kubenswrapper[4806]: I0126 08:25:43.189265 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fj8s\" (UniqueName: \"kubernetes.io/projected/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-kube-api-access-8fj8s\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz\" (UID: \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:43 crc kubenswrapper[4806]: I0126 08:25:43.283263 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:43 crc kubenswrapper[4806]: I0126 08:25:43.873660 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz"] Jan 26 08:25:44 crc kubenswrapper[4806]: I0126 08:25:44.042197 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:25:44 crc kubenswrapper[4806]: E0126 08:25:44.042514 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:25:44 crc kubenswrapper[4806]: I0126 08:25:44.830726 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" event={"ID":"9a8bd166-d69f-424d-b0b6-bfb56d092e7d","Type":"ContainerStarted","Data":"300679e6a7572421ddf072056cf8e88d3aae02344d7c4404c28847b8d3b964a3"} Jan 26 08:25:44 crc kubenswrapper[4806]: I0126 08:25:44.830769 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" event={"ID":"9a8bd166-d69f-424d-b0b6-bfb56d092e7d","Type":"ContainerStarted","Data":"ec14a24805f1ce1cb545cf44b767ee40840c6ff2fe7f5373a37258b35fd7629f"} Jan 26 08:25:44 crc kubenswrapper[4806]: I0126 08:25:44.850694 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" podStartSLOduration=2.414378547 podStartE2EDuration="2.85067542s" podCreationTimestamp="2026-01-26 08:25:42 +0000 UTC" 
firstStartedPulling="2026-01-26 08:25:43.901512241 +0000 UTC m=+1923.165920307" lastFinishedPulling="2026-01-26 08:25:44.337809134 +0000 UTC m=+1923.602217180" observedRunningTime="2026-01-26 08:25:44.843541296 +0000 UTC m=+1924.107949352" watchObservedRunningTime="2026-01-26 08:25:44.85067542 +0000 UTC m=+1924.115083476" Jan 26 08:25:56 crc kubenswrapper[4806]: I0126 08:25:56.182143 4806 generic.go:334] "Generic (PLEG): container finished" podID="9a8bd166-d69f-424d-b0b6-bfb56d092e7d" containerID="300679e6a7572421ddf072056cf8e88d3aae02344d7c4404c28847b8d3b964a3" exitCode=0 Jan 26 08:25:56 crc kubenswrapper[4806]: I0126 08:25:56.182886 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" event={"ID":"9a8bd166-d69f-424d-b0b6-bfb56d092e7d","Type":"ContainerDied","Data":"300679e6a7572421ddf072056cf8e88d3aae02344d7c4404c28847b8d3b964a3"} Jan 26 08:25:57 crc kubenswrapper[4806]: I0126 08:25:57.747726 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:57 crc kubenswrapper[4806]: I0126 08:25:57.856288 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-ssh-key-openstack-edpm-ipam\") pod \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\" (UID: \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\") " Jan 26 08:25:57 crc kubenswrapper[4806]: I0126 08:25:57.856380 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fj8s\" (UniqueName: \"kubernetes.io/projected/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-kube-api-access-8fj8s\") pod \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\" (UID: \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\") " Jan 26 08:25:57 crc kubenswrapper[4806]: I0126 08:25:57.856534 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-inventory\") pod \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\" (UID: \"9a8bd166-d69f-424d-b0b6-bfb56d092e7d\") " Jan 26 08:25:57 crc kubenswrapper[4806]: I0126 08:25:57.867720 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-kube-api-access-8fj8s" (OuterVolumeSpecName: "kube-api-access-8fj8s") pod "9a8bd166-d69f-424d-b0b6-bfb56d092e7d" (UID: "9a8bd166-d69f-424d-b0b6-bfb56d092e7d"). InnerVolumeSpecName "kube-api-access-8fj8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:25:57 crc kubenswrapper[4806]: I0126 08:25:57.883396 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-inventory" (OuterVolumeSpecName: "inventory") pod "9a8bd166-d69f-424d-b0b6-bfb56d092e7d" (UID: "9a8bd166-d69f-424d-b0b6-bfb56d092e7d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:25:57 crc kubenswrapper[4806]: I0126 08:25:57.883811 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9a8bd166-d69f-424d-b0b6-bfb56d092e7d" (UID: "9a8bd166-d69f-424d-b0b6-bfb56d092e7d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:25:57 crc kubenswrapper[4806]: I0126 08:25:57.958290 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:25:57 crc kubenswrapper[4806]: I0126 08:25:57.958694 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:25:57 crc kubenswrapper[4806]: I0126 08:25:57.958708 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fj8s\" (UniqueName: \"kubernetes.io/projected/9a8bd166-d69f-424d-b0b6-bfb56d092e7d-kube-api-access-8fj8s\") on node \"crc\" DevicePath \"\"" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.207160 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" event={"ID":"9a8bd166-d69f-424d-b0b6-bfb56d092e7d","Type":"ContainerDied","Data":"ec14a24805f1ce1cb545cf44b767ee40840c6ff2fe7f5373a37258b35fd7629f"} Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.207642 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec14a24805f1ce1cb545cf44b767ee40840c6ff2fe7f5373a37258b35fd7629f" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.207296 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.318472 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s"] Jan 26 08:25:58 crc kubenswrapper[4806]: E0126 08:25:58.319026 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a8bd166-d69f-424d-b0b6-bfb56d092e7d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.319048 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a8bd166-d69f-424d-b0b6-bfb56d092e7d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.319315 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a8bd166-d69f-424d-b0b6-bfb56d092e7d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.320281 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.323032 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.323481 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.324249 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.324432 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.324651 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.324737 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.324819 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.324894 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.340041 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s"] Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.365887 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.365934 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.365976 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.366037 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.366062 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.366117 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.366138 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.366185 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.366222 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.366243 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdbcg\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-kube-api-access-jdbcg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.366294 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: 
\"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.366343 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.366367 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.366388 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.467749 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.467796 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.467846 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.467867 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.467902 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.467934 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.467952 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdbcg\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-kube-api-access-jdbcg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.467990 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.468033 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.468056 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.468082 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.468117 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.468139 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.468172 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.474845 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.472892 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.482872 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.484003 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.483985 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.482967 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.484939 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.485037 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.485821 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.485957 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.486424 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.486455 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdbcg\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-kube-api-access-jdbcg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.487461 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.494250 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:58 crc kubenswrapper[4806]: I0126 08:25:58.646388 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:25:59 crc kubenswrapper[4806]: I0126 08:25:59.042815 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:25:59 crc kubenswrapper[4806]: E0126 08:25:59.043318 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:25:59 crc kubenswrapper[4806]: W0126 08:25:59.275236 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod597e4bf2_e48f_4f61_90a0_2e930444f754.slice/crio-35a1b9e231e49c8573559bd593d3dd5bb968755c4913be2f90393d296ead9523 WatchSource:0}: Error finding container 35a1b9e231e49c8573559bd593d3dd5bb968755c4913be2f90393d296ead9523: Status 404 returned error can't find the container with id 35a1b9e231e49c8573559bd593d3dd5bb968755c4913be2f90393d296ead9523 Jan 26 08:25:59 crc kubenswrapper[4806]: I0126 08:25:59.293735 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s"] Jan 26 08:26:00 crc kubenswrapper[4806]: I0126 08:26:00.223727 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" event={"ID":"597e4bf2-e48f-4f61-90a0-2e930444f754","Type":"ContainerStarted","Data":"7ebdc7083addcd49e05297f9f37b53dc3084b5fd477c6e58a9f1f4b103c77402"} Jan 26 08:26:00 crc kubenswrapper[4806]: I0126 08:26:00.224072 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" event={"ID":"597e4bf2-e48f-4f61-90a0-2e930444f754","Type":"ContainerStarted","Data":"35a1b9e231e49c8573559bd593d3dd5bb968755c4913be2f90393d296ead9523"} Jan 26 08:26:00 crc kubenswrapper[4806]: I0126 08:26:00.247933 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" podStartSLOduration=1.862920542 podStartE2EDuration="2.247914381s" podCreationTimestamp="2026-01-26 08:25:58 +0000 UTC" firstStartedPulling="2026-01-26 08:25:59.278095745 +0000 UTC m=+1938.542503801" lastFinishedPulling="2026-01-26 08:25:59.663089564 +0000 UTC m=+1938.927497640" observedRunningTime="2026-01-26 08:26:00.240997854 
+0000 UTC m=+1939.505405910" watchObservedRunningTime="2026-01-26 08:26:00.247914381 +0000 UTC m=+1939.512322437" Jan 26 08:26:13 crc kubenswrapper[4806]: I0126 08:26:13.042361 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:26:13 crc kubenswrapper[4806]: E0126 08:26:13.043216 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:26:28 crc kubenswrapper[4806]: I0126 08:26:28.041741 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:26:28 crc kubenswrapper[4806]: E0126 08:26:28.042513 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:26:43 crc kubenswrapper[4806]: I0126 08:26:43.042424 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:26:43 crc kubenswrapper[4806]: E0126 08:26:43.043262 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:26:44 crc kubenswrapper[4806]: I0126 08:26:44.625708 4806 generic.go:334] "Generic (PLEG): container finished" podID="597e4bf2-e48f-4f61-90a0-2e930444f754" containerID="7ebdc7083addcd49e05297f9f37b53dc3084b5fd477c6e58a9f1f4b103c77402" exitCode=0 Jan 26 08:26:44 crc kubenswrapper[4806]: I0126 08:26:44.625765 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" event={"ID":"597e4bf2-e48f-4f61-90a0-2e930444f754","Type":"ContainerDied","Data":"7ebdc7083addcd49e05297f9f37b53dc3084b5fd477c6e58a9f1f4b103c77402"} Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.232719 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.382926 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-bootstrap-combined-ca-bundle\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.382997 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-repo-setup-combined-ca-bundle\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.383025 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.383146 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-ovn-default-certs-0\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.383183 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-ovn-combined-ca-bundle\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.383232 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.383268 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-telemetry-combined-ca-bundle\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.383305 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-inventory\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.383329 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-neutron-metadata-combined-ca-bundle\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: 
\"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.383368 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.383391 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdbcg\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-kube-api-access-jdbcg\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.383453 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-libvirt-combined-ca-bundle\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.383480 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-ssh-key-openstack-edpm-ipam\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.383585 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-nova-combined-ca-bundle\") pod \"597e4bf2-e48f-4f61-90a0-2e930444f754\" (UID: \"597e4bf2-e48f-4f61-90a0-2e930444f754\") " Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.393101 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.393421 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.393554 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.393619 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.393682 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.395169 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.398769 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.398813 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-kube-api-access-jdbcg" (OuterVolumeSpecName: "kube-api-access-jdbcg") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "kube-api-access-jdbcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.400422 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.402893 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.402934 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.419790 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.426051 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.429122 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-inventory" (OuterVolumeSpecName: "inventory") pod "597e4bf2-e48f-4f61-90a0-2e930444f754" (UID: "597e4bf2-e48f-4f61-90a0-2e930444f754"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486833 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486863 4806 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486874 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486884 4806 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486894 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486902 4806 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486911 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486920 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdbcg\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-kube-api-access-jdbcg\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486928 4806 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486938 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486946 4806 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486955 4806 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486963 4806 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/597e4bf2-e48f-4f61-90a0-2e930444f754-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.486972 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/597e4bf2-e48f-4f61-90a0-2e930444f754-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.647778 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.647693 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s" event={"ID":"597e4bf2-e48f-4f61-90a0-2e930444f754","Type":"ContainerDied","Data":"35a1b9e231e49c8573559bd593d3dd5bb968755c4913be2f90393d296ead9523"} Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.651655 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35a1b9e231e49c8573559bd593d3dd5bb968755c4913be2f90393d296ead9523" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.784001 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf"] Jan 26 08:26:46 crc kubenswrapper[4806]: E0126 08:26:46.784902 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="597e4bf2-e48f-4f61-90a0-2e930444f754" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.784932 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="597e4bf2-e48f-4f61-90a0-2e930444f754" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.785149 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="597e4bf2-e48f-4f61-90a0-2e930444f754" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.785929 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.793739 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.794042 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.794118 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.794361 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.794905 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.828065 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf"] Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.900166 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.900494 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.900727 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.900889 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwz9l\" (UniqueName: \"kubernetes.io/projected/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-kube-api-access-hwz9l\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:46 crc kubenswrapper[4806]: I0126 08:26:46.901017 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.003448 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.003554 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.003611 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.003679 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwz9l\" (UniqueName: \"kubernetes.io/projected/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-kube-api-access-hwz9l\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.003711 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.004593 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.007482 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.008136 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.013835 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.026157 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwz9l\" (UniqueName: \"kubernetes.io/projected/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-kube-api-access-hwz9l\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-ks5cf\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.131427 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.638183 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf"] Jan 26 08:26:47 crc kubenswrapper[4806]: I0126 08:26:47.660375 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" event={"ID":"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7","Type":"ContainerStarted","Data":"1fab2ad2bc001172b9a168d723986963e38b95da7161ce88514f89ceeb22127c"} Jan 26 08:26:48 crc kubenswrapper[4806]: I0126 08:26:48.668979 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" event={"ID":"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7","Type":"ContainerStarted","Data":"3ec561f268068f626835dca66ba130f09a46b9f040aa5d0b2a749649691e563a"} Jan 26 08:26:48 crc kubenswrapper[4806]: I0126 08:26:48.706951 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" podStartSLOduration=2.246341147 podStartE2EDuration="2.706926712s" podCreationTimestamp="2026-01-26 08:26:46 +0000 UTC" firstStartedPulling="2026-01-26 08:26:47.64718376 +0000 UTC m=+1986.911591836" lastFinishedPulling="2026-01-26 08:26:48.107769355 +0000 UTC m=+1987.372177401" observedRunningTime="2026-01-26 08:26:48.698935404 +0000 UTC m=+1987.963343480" watchObservedRunningTime="2026-01-26 08:26:48.706926712 +0000 UTC m=+1987.971334788" Jan 26 08:26:57 crc kubenswrapper[4806]: I0126 08:26:57.042282 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:26:57 crc kubenswrapper[4806]: I0126 08:26:57.751987 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"4674bef7a8379021c9731cea3355fb93223c54af51ccd66095b35a4c211c8d3c"} Jan 26 08:28:04 crc kubenswrapper[4806]: I0126 08:28:04.357917 4806 generic.go:334] "Generic (PLEG): container finished" podID="b91476e2-3e0d-4447-b7d8-f9f4696ca1c7" containerID="3ec561f268068f626835dca66ba130f09a46b9f040aa5d0b2a749649691e563a" exitCode=0 Jan 26 08:28:04 crc kubenswrapper[4806]: I0126 08:28:04.358015 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" event={"ID":"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7","Type":"ContainerDied","Data":"3ec561f268068f626835dca66ba130f09a46b9f040aa5d0b2a749649691e563a"} Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.742245 4806 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.767546 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-inventory\") pod \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.767690 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ovn-combined-ca-bundle\") pod \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.767792 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwz9l\" (UniqueName: \"kubernetes.io/projected/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-kube-api-access-hwz9l\") pod \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.767821 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ovncontroller-config-0\") pod \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.767847 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ssh-key-openstack-edpm-ipam\") pod \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\" (UID: \"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7\") " Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.788634 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "b91476e2-3e0d-4447-b7d8-f9f4696ca1c7" (UID: "b91476e2-3e0d-4447-b7d8-f9f4696ca1c7"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.788730 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-kube-api-access-hwz9l" (OuterVolumeSpecName: "kube-api-access-hwz9l") pod "b91476e2-3e0d-4447-b7d8-f9f4696ca1c7" (UID: "b91476e2-3e0d-4447-b7d8-f9f4696ca1c7"). InnerVolumeSpecName "kube-api-access-hwz9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.798627 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b91476e2-3e0d-4447-b7d8-f9f4696ca1c7" (UID: "b91476e2-3e0d-4447-b7d8-f9f4696ca1c7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.801757 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-inventory" (OuterVolumeSpecName: "inventory") pod "b91476e2-3e0d-4447-b7d8-f9f4696ca1c7" (UID: "b91476e2-3e0d-4447-b7d8-f9f4696ca1c7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.812026 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "b91476e2-3e0d-4447-b7d8-f9f4696ca1c7" (UID: "b91476e2-3e0d-4447-b7d8-f9f4696ca1c7"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.870960 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.871172 4806 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.871272 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwz9l\" (UniqueName: \"kubernetes.io/projected/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-kube-api-access-hwz9l\") on node \"crc\" DevicePath \"\"" Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.871351 4806 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:28:05 crc kubenswrapper[4806]: I0126 08:28:05.871417 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b91476e2-3e0d-4447-b7d8-f9f4696ca1c7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.376777 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" event={"ID":"b91476e2-3e0d-4447-b7d8-f9f4696ca1c7","Type":"ContainerDied","Data":"1fab2ad2bc001172b9a168d723986963e38b95da7161ce88514f89ceeb22127c"} Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.376812 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-ks5cf" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.376832 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fab2ad2bc001172b9a168d723986963e38b95da7161ce88514f89ceeb22127c" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.486195 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r"] Jan 26 08:28:06 crc kubenswrapper[4806]: E0126 08:28:06.486663 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b91476e2-3e0d-4447-b7d8-f9f4696ca1c7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.486683 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b91476e2-3e0d-4447-b7d8-f9f4696ca1c7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.486935 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b91476e2-3e0d-4447-b7d8-f9f4696ca1c7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.487780 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.492463 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.493082 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.493341 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.495174 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.498255 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.498912 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.505575 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r"] Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.584909 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.584986 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z92cx\" (UniqueName: \"kubernetes.io/projected/0c4ea336-2189-42c5-9c34-1ad75642efd0-kube-api-access-z92cx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: 
\"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.585099 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.585178 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.585313 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.585464 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.686589 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.686659 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.686695 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 
08:28:06.686765 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.686796 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.686825 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z92cx\" (UniqueName: \"kubernetes.io/projected/0c4ea336-2189-42c5-9c34-1ad75642efd0-kube-api-access-z92cx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.690369 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.690382 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.692145 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.692178 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.698380 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.709125 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z92cx\" (UniqueName: \"kubernetes.io/projected/0c4ea336-2189-42c5-9c34-1ad75642efd0-kube-api-access-z92cx\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:06 crc kubenswrapper[4806]: I0126 08:28:06.802150 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:28:07 crc kubenswrapper[4806]: I0126 08:28:07.295385 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r"] Jan 26 08:28:07 crc kubenswrapper[4806]: I0126 08:28:07.384309 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" event={"ID":"0c4ea336-2189-42c5-9c34-1ad75642efd0","Type":"ContainerStarted","Data":"dd426b8f54bbab600b98513d55c707a8e28f4381a8c4c22b4642dbcb9e841366"} Jan 26 08:28:08 crc kubenswrapper[4806]: I0126 08:28:08.395960 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" event={"ID":"0c4ea336-2189-42c5-9c34-1ad75642efd0","Type":"ContainerStarted","Data":"4abc80601f5ad74f92c41493a75e136c23ff5242347b9368c403e6ec72246213"} Jan 26 08:28:08 crc kubenswrapper[4806]: I0126 08:28:08.416899 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" podStartSLOduration=1.9304107080000001 podStartE2EDuration="2.416882561s" podCreationTimestamp="2026-01-26 08:28:06 +0000 UTC" firstStartedPulling="2026-01-26 08:28:07.303017096 +0000 UTC m=+2066.567425152" lastFinishedPulling="2026-01-26 08:28:07.789488929 +0000 UTC m=+2067.053897005" observedRunningTime="2026-01-26 08:28:08.412563218 +0000 UTC m=+2067.676971274" watchObservedRunningTime="2026-01-26 08:28:08.416882561 +0000 UTC m=+2067.681290617" Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.450919 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-56fcf"] Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.454275 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.462389 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56fcf"] Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.557739 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413fed7a-d662-4789-b5c4-32998b12ebdb-catalog-content\") pod \"redhat-operators-56fcf\" (UID: \"413fed7a-d662-4789-b5c4-32998b12ebdb\") " pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.557855 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413fed7a-d662-4789-b5c4-32998b12ebdb-utilities\") pod \"redhat-operators-56fcf\" (UID: \"413fed7a-d662-4789-b5c4-32998b12ebdb\") " pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.557946 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knr92\" (UniqueName: \"kubernetes.io/projected/413fed7a-d662-4789-b5c4-32998b12ebdb-kube-api-access-knr92\") pod \"redhat-operators-56fcf\" (UID: \"413fed7a-d662-4789-b5c4-32998b12ebdb\") " pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.659550 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knr92\" (UniqueName: \"kubernetes.io/projected/413fed7a-d662-4789-b5c4-32998b12ebdb-kube-api-access-knr92\") pod \"redhat-operators-56fcf\" (UID: \"413fed7a-d662-4789-b5c4-32998b12ebdb\") " pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.659884 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413fed7a-d662-4789-b5c4-32998b12ebdb-catalog-content\") pod \"redhat-operators-56fcf\" (UID: \"413fed7a-d662-4789-b5c4-32998b12ebdb\") " pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.660005 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413fed7a-d662-4789-b5c4-32998b12ebdb-utilities\") pod \"redhat-operators-56fcf\" (UID: \"413fed7a-d662-4789-b5c4-32998b12ebdb\") " pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.660414 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413fed7a-d662-4789-b5c4-32998b12ebdb-catalog-content\") pod \"redhat-operators-56fcf\" (UID: \"413fed7a-d662-4789-b5c4-32998b12ebdb\") " pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.660464 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413fed7a-d662-4789-b5c4-32998b12ebdb-utilities\") pod \"redhat-operators-56fcf\" (UID: \"413fed7a-d662-4789-b5c4-32998b12ebdb\") " pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.687720 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-knr92\" (UniqueName: \"kubernetes.io/projected/413fed7a-d662-4789-b5c4-32998b12ebdb-kube-api-access-knr92\") pod \"redhat-operators-56fcf\" (UID: \"413fed7a-d662-4789-b5c4-32998b12ebdb\") " pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:13 crc kubenswrapper[4806]: I0126 08:28:13.797033 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:14 crc kubenswrapper[4806]: I0126 08:28:14.265276 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-56fcf"] Jan 26 08:28:14 crc kubenswrapper[4806]: I0126 08:28:14.448898 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56fcf" event={"ID":"413fed7a-d662-4789-b5c4-32998b12ebdb","Type":"ContainerStarted","Data":"799a8a62601eda8ee01f82f8963c124f413debda2e2298b3082969be3b034c7a"} Jan 26 08:28:15 crc kubenswrapper[4806]: I0126 08:28:15.456801 4806 generic.go:334] "Generic (PLEG): container finished" podID="413fed7a-d662-4789-b5c4-32998b12ebdb" containerID="8e6a86b0fe33caf64a917ce54833af8ccd702affc910a83e718cdc4db2c70ece" exitCode=0 Jan 26 08:28:15 crc kubenswrapper[4806]: I0126 08:28:15.457016 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56fcf" event={"ID":"413fed7a-d662-4789-b5c4-32998b12ebdb","Type":"ContainerDied","Data":"8e6a86b0fe33caf64a917ce54833af8ccd702affc910a83e718cdc4db2c70ece"} Jan 26 08:28:16 crc kubenswrapper[4806]: I0126 08:28:16.468249 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56fcf" event={"ID":"413fed7a-d662-4789-b5c4-32998b12ebdb","Type":"ContainerStarted","Data":"d764f09b636d492d88516798065fb0a614463904c51620c6041af382973715bd"} Jan 26 08:28:20 crc kubenswrapper[4806]: I0126 08:28:20.510855 4806 generic.go:334] "Generic (PLEG): container finished" podID="413fed7a-d662-4789-b5c4-32998b12ebdb" containerID="d764f09b636d492d88516798065fb0a614463904c51620c6041af382973715bd" exitCode=0 Jan 26 08:28:20 crc kubenswrapper[4806]: I0126 08:28:20.511445 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56fcf" event={"ID":"413fed7a-d662-4789-b5c4-32998b12ebdb","Type":"ContainerDied","Data":"d764f09b636d492d88516798065fb0a614463904c51620c6041af382973715bd"} Jan 26 08:28:21 crc kubenswrapper[4806]: I0126 08:28:21.522474 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56fcf" event={"ID":"413fed7a-d662-4789-b5c4-32998b12ebdb","Type":"ContainerStarted","Data":"12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254"} Jan 26 08:28:21 crc kubenswrapper[4806]: I0126 08:28:21.561407 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-56fcf" podStartSLOduration=2.800221382 podStartE2EDuration="8.561376199s" podCreationTimestamp="2026-01-26 08:28:13 +0000 UTC" firstStartedPulling="2026-01-26 08:28:15.458955399 +0000 UTC m=+2074.723363465" lastFinishedPulling="2026-01-26 08:28:21.220110226 +0000 UTC m=+2080.484518282" observedRunningTime="2026-01-26 08:28:21.540881444 +0000 UTC m=+2080.805289540" watchObservedRunningTime="2026-01-26 08:28:21.561376199 +0000 UTC m=+2080.825784295" Jan 26 08:28:22 crc kubenswrapper[4806]: I0126 08:28:22.493132 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dqwkd"] Jan 26 08:28:22 crc 
kubenswrapper[4806]: I0126 08:28:22.496116 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:22 crc kubenswrapper[4806]: I0126 08:28:22.511462 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqwkd"] Jan 26 08:28:22 crc kubenswrapper[4806]: I0126 08:28:22.536864 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69f440e9-a1c9-4d00-acef-68b67429269d-utilities\") pod \"certified-operators-dqwkd\" (UID: \"69f440e9-a1c9-4d00-acef-68b67429269d\") " pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:22 crc kubenswrapper[4806]: I0126 08:28:22.537005 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb9c4\" (UniqueName: \"kubernetes.io/projected/69f440e9-a1c9-4d00-acef-68b67429269d-kube-api-access-vb9c4\") pod \"certified-operators-dqwkd\" (UID: \"69f440e9-a1c9-4d00-acef-68b67429269d\") " pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:22 crc kubenswrapper[4806]: I0126 08:28:22.537041 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69f440e9-a1c9-4d00-acef-68b67429269d-catalog-content\") pod \"certified-operators-dqwkd\" (UID: \"69f440e9-a1c9-4d00-acef-68b67429269d\") " pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:22 crc kubenswrapper[4806]: I0126 08:28:22.638552 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb9c4\" (UniqueName: \"kubernetes.io/projected/69f440e9-a1c9-4d00-acef-68b67429269d-kube-api-access-vb9c4\") pod \"certified-operators-dqwkd\" (UID: \"69f440e9-a1c9-4d00-acef-68b67429269d\") " pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:22 crc kubenswrapper[4806]: I0126 08:28:22.638632 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69f440e9-a1c9-4d00-acef-68b67429269d-catalog-content\") pod \"certified-operators-dqwkd\" (UID: \"69f440e9-a1c9-4d00-acef-68b67429269d\") " pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:22 crc kubenswrapper[4806]: I0126 08:28:22.638697 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69f440e9-a1c9-4d00-acef-68b67429269d-utilities\") pod \"certified-operators-dqwkd\" (UID: \"69f440e9-a1c9-4d00-acef-68b67429269d\") " pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:22 crc kubenswrapper[4806]: I0126 08:28:22.639324 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69f440e9-a1c9-4d00-acef-68b67429269d-utilities\") pod \"certified-operators-dqwkd\" (UID: \"69f440e9-a1c9-4d00-acef-68b67429269d\") " pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:22 crc kubenswrapper[4806]: I0126 08:28:22.640109 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69f440e9-a1c9-4d00-acef-68b67429269d-catalog-content\") pod \"certified-operators-dqwkd\" (UID: \"69f440e9-a1c9-4d00-acef-68b67429269d\") " pod="openshift-marketplace/certified-operators-dqwkd" Jan 
26 08:28:22 crc kubenswrapper[4806]: I0126 08:28:22.663389 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb9c4\" (UniqueName: \"kubernetes.io/projected/69f440e9-a1c9-4d00-acef-68b67429269d-kube-api-access-vb9c4\") pod \"certified-operators-dqwkd\" (UID: \"69f440e9-a1c9-4d00-acef-68b67429269d\") " pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:22 crc kubenswrapper[4806]: I0126 08:28:22.814608 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:23 crc kubenswrapper[4806]: W0126 08:28:23.410099 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69f440e9_a1c9_4d00_acef_68b67429269d.slice/crio-9a35a6d3b154359506f1732d21addea9d50c657da321d726c1744cd8286da771 WatchSource:0}: Error finding container 9a35a6d3b154359506f1732d21addea9d50c657da321d726c1744cd8286da771: Status 404 returned error can't find the container with id 9a35a6d3b154359506f1732d21addea9d50c657da321d726c1744cd8286da771 Jan 26 08:28:23 crc kubenswrapper[4806]: I0126 08:28:23.411512 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dqwkd"] Jan 26 08:28:23 crc kubenswrapper[4806]: I0126 08:28:23.554907 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqwkd" event={"ID":"69f440e9-a1c9-4d00-acef-68b67429269d","Type":"ContainerStarted","Data":"9a35a6d3b154359506f1732d21addea9d50c657da321d726c1744cd8286da771"} Jan 26 08:28:23 crc kubenswrapper[4806]: I0126 08:28:23.798318 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:23 crc kubenswrapper[4806]: I0126 08:28:23.798884 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:24 crc kubenswrapper[4806]: I0126 08:28:24.601625 4806 generic.go:334] "Generic (PLEG): container finished" podID="69f440e9-a1c9-4d00-acef-68b67429269d" containerID="06f9ca10a12505be9b3c4d720fbb5d90c1b7274182f67deb489c84477c64d988" exitCode=0 Jan 26 08:28:24 crc kubenswrapper[4806]: I0126 08:28:24.601679 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqwkd" event={"ID":"69f440e9-a1c9-4d00-acef-68b67429269d","Type":"ContainerDied","Data":"06f9ca10a12505be9b3c4d720fbb5d90c1b7274182f67deb489c84477c64d988"} Jan 26 08:28:24 crc kubenswrapper[4806]: I0126 08:28:24.854079 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56fcf" podUID="413fed7a-d662-4789-b5c4-32998b12ebdb" containerName="registry-server" probeResult="failure" output=< Jan 26 08:28:24 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 08:28:24 crc kubenswrapper[4806]: > Jan 26 08:28:25 crc kubenswrapper[4806]: I0126 08:28:25.613477 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqwkd" event={"ID":"69f440e9-a1c9-4d00-acef-68b67429269d","Type":"ContainerStarted","Data":"4101f349678218d7180ccb52a5e0afec7b7b22e73affd14eadfb25a2f62218e9"} Jan 26 08:28:27 crc kubenswrapper[4806]: I0126 08:28:27.638446 4806 generic.go:334] "Generic (PLEG): container finished" podID="69f440e9-a1c9-4d00-acef-68b67429269d" 
containerID="4101f349678218d7180ccb52a5e0afec7b7b22e73affd14eadfb25a2f62218e9" exitCode=0 Jan 26 08:28:27 crc kubenswrapper[4806]: I0126 08:28:27.638560 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqwkd" event={"ID":"69f440e9-a1c9-4d00-acef-68b67429269d","Type":"ContainerDied","Data":"4101f349678218d7180ccb52a5e0afec7b7b22e73affd14eadfb25a2f62218e9"} Jan 26 08:28:28 crc kubenswrapper[4806]: I0126 08:28:28.648878 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqwkd" event={"ID":"69f440e9-a1c9-4d00-acef-68b67429269d","Type":"ContainerStarted","Data":"95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9"} Jan 26 08:28:28 crc kubenswrapper[4806]: I0126 08:28:28.670838 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dqwkd" podStartSLOduration=3.229560449 podStartE2EDuration="6.670815808s" podCreationTimestamp="2026-01-26 08:28:22 +0000 UTC" firstStartedPulling="2026-01-26 08:28:24.605498332 +0000 UTC m=+2083.869906428" lastFinishedPulling="2026-01-26 08:28:28.046753731 +0000 UTC m=+2087.311161787" observedRunningTime="2026-01-26 08:28:28.666487194 +0000 UTC m=+2087.930895260" watchObservedRunningTime="2026-01-26 08:28:28.670815808 +0000 UTC m=+2087.935223874" Jan 26 08:28:32 crc kubenswrapper[4806]: I0126 08:28:32.815483 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:32 crc kubenswrapper[4806]: I0126 08:28:32.817028 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:32 crc kubenswrapper[4806]: I0126 08:28:32.859782 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:33 crc kubenswrapper[4806]: I0126 08:28:33.742641 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:33 crc kubenswrapper[4806]: I0126 08:28:33.836264 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dqwkd"] Jan 26 08:28:34 crc kubenswrapper[4806]: I0126 08:28:34.850647 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-56fcf" podUID="413fed7a-d662-4789-b5c4-32998b12ebdb" containerName="registry-server" probeResult="failure" output=< Jan 26 08:28:34 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 08:28:34 crc kubenswrapper[4806]: > Jan 26 08:28:35 crc kubenswrapper[4806]: I0126 08:28:35.702774 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dqwkd" podUID="69f440e9-a1c9-4d00-acef-68b67429269d" containerName="registry-server" containerID="cri-o://95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9" gracePeriod=2 Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.160945 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.308785 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb9c4\" (UniqueName: \"kubernetes.io/projected/69f440e9-a1c9-4d00-acef-68b67429269d-kube-api-access-vb9c4\") pod \"69f440e9-a1c9-4d00-acef-68b67429269d\" (UID: \"69f440e9-a1c9-4d00-acef-68b67429269d\") " Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.308986 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69f440e9-a1c9-4d00-acef-68b67429269d-catalog-content\") pod \"69f440e9-a1c9-4d00-acef-68b67429269d\" (UID: \"69f440e9-a1c9-4d00-acef-68b67429269d\") " Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.309084 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69f440e9-a1c9-4d00-acef-68b67429269d-utilities\") pod \"69f440e9-a1c9-4d00-acef-68b67429269d\" (UID: \"69f440e9-a1c9-4d00-acef-68b67429269d\") " Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.309610 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69f440e9-a1c9-4d00-acef-68b67429269d-utilities" (OuterVolumeSpecName: "utilities") pod "69f440e9-a1c9-4d00-acef-68b67429269d" (UID: "69f440e9-a1c9-4d00-acef-68b67429269d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.317237 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f440e9-a1c9-4d00-acef-68b67429269d-kube-api-access-vb9c4" (OuterVolumeSpecName: "kube-api-access-vb9c4") pod "69f440e9-a1c9-4d00-acef-68b67429269d" (UID: "69f440e9-a1c9-4d00-acef-68b67429269d"). InnerVolumeSpecName "kube-api-access-vb9c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.373372 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69f440e9-a1c9-4d00-acef-68b67429269d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69f440e9-a1c9-4d00-acef-68b67429269d" (UID: "69f440e9-a1c9-4d00-acef-68b67429269d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.411473 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb9c4\" (UniqueName: \"kubernetes.io/projected/69f440e9-a1c9-4d00-acef-68b67429269d-kube-api-access-vb9c4\") on node \"crc\" DevicePath \"\"" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.411505 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69f440e9-a1c9-4d00-acef-68b67429269d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.411514 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69f440e9-a1c9-4d00-acef-68b67429269d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.714200 4806 generic.go:334] "Generic (PLEG): container finished" podID="69f440e9-a1c9-4d00-acef-68b67429269d" containerID="95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9" exitCode=0 Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.714246 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dqwkd" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.714264 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqwkd" event={"ID":"69f440e9-a1c9-4d00-acef-68b67429269d","Type":"ContainerDied","Data":"95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9"} Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.714715 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dqwkd" event={"ID":"69f440e9-a1c9-4d00-acef-68b67429269d","Type":"ContainerDied","Data":"9a35a6d3b154359506f1732d21addea9d50c657da321d726c1744cd8286da771"} Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.714741 4806 scope.go:117] "RemoveContainer" containerID="95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.748405 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dqwkd"] Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.748619 4806 scope.go:117] "RemoveContainer" containerID="4101f349678218d7180ccb52a5e0afec7b7b22e73affd14eadfb25a2f62218e9" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.760617 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dqwkd"] Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.784361 4806 scope.go:117] "RemoveContainer" containerID="06f9ca10a12505be9b3c4d720fbb5d90c1b7274182f67deb489c84477c64d988" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.821084 4806 scope.go:117] "RemoveContainer" containerID="95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9" Jan 26 08:28:36 crc kubenswrapper[4806]: E0126 08:28:36.821499 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9\": container with ID starting with 95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9 not found: ID does not exist" containerID="95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.821628 
4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9"} err="failed to get container status \"95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9\": rpc error: code = NotFound desc = could not find container \"95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9\": container with ID starting with 95a2958f5cb591440a9344381956914599eec60f5f930a2da07dac73d6b35ba9 not found: ID does not exist" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.821661 4806 scope.go:117] "RemoveContainer" containerID="4101f349678218d7180ccb52a5e0afec7b7b22e73affd14eadfb25a2f62218e9" Jan 26 08:28:36 crc kubenswrapper[4806]: E0126 08:28:36.822237 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4101f349678218d7180ccb52a5e0afec7b7b22e73affd14eadfb25a2f62218e9\": container with ID starting with 4101f349678218d7180ccb52a5e0afec7b7b22e73affd14eadfb25a2f62218e9 not found: ID does not exist" containerID="4101f349678218d7180ccb52a5e0afec7b7b22e73affd14eadfb25a2f62218e9" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.822266 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4101f349678218d7180ccb52a5e0afec7b7b22e73affd14eadfb25a2f62218e9"} err="failed to get container status \"4101f349678218d7180ccb52a5e0afec7b7b22e73affd14eadfb25a2f62218e9\": rpc error: code = NotFound desc = could not find container \"4101f349678218d7180ccb52a5e0afec7b7b22e73affd14eadfb25a2f62218e9\": container with ID starting with 4101f349678218d7180ccb52a5e0afec7b7b22e73affd14eadfb25a2f62218e9 not found: ID does not exist" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.822284 4806 scope.go:117] "RemoveContainer" containerID="06f9ca10a12505be9b3c4d720fbb5d90c1b7274182f67deb489c84477c64d988" Jan 26 08:28:36 crc kubenswrapper[4806]: E0126 08:28:36.822687 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f9ca10a12505be9b3c4d720fbb5d90c1b7274182f67deb489c84477c64d988\": container with ID starting with 06f9ca10a12505be9b3c4d720fbb5d90c1b7274182f67deb489c84477c64d988 not found: ID does not exist" containerID="06f9ca10a12505be9b3c4d720fbb5d90c1b7274182f67deb489c84477c64d988" Jan 26 08:28:36 crc kubenswrapper[4806]: I0126 08:28:36.822722 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f9ca10a12505be9b3c4d720fbb5d90c1b7274182f67deb489c84477c64d988"} err="failed to get container status \"06f9ca10a12505be9b3c4d720fbb5d90c1b7274182f67deb489c84477c64d988\": rpc error: code = NotFound desc = could not find container \"06f9ca10a12505be9b3c4d720fbb5d90c1b7274182f67deb489c84477c64d988\": container with ID starting with 06f9ca10a12505be9b3c4d720fbb5d90c1b7274182f67deb489c84477c64d988 not found: ID does not exist" Jan 26 08:28:37 crc kubenswrapper[4806]: I0126 08:28:37.052930 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f440e9-a1c9-4d00-acef-68b67429269d" path="/var/lib/kubelet/pods/69f440e9-a1c9-4d00-acef-68b67429269d/volumes" Jan 26 08:28:43 crc kubenswrapper[4806]: I0126 08:28:43.915890 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:43 crc kubenswrapper[4806]: I0126 08:28:43.969802 4806 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:44 crc kubenswrapper[4806]: I0126 08:28:44.651830 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56fcf"] Jan 26 08:28:45 crc kubenswrapper[4806]: I0126 08:28:45.798161 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-56fcf" podUID="413fed7a-d662-4789-b5c4-32998b12ebdb" containerName="registry-server" containerID="cri-o://12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254" gracePeriod=2 Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.220438 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.229569 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413fed7a-d662-4789-b5c4-32998b12ebdb-catalog-content\") pod \"413fed7a-d662-4789-b5c4-32998b12ebdb\" (UID: \"413fed7a-d662-4789-b5c4-32998b12ebdb\") " Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.330970 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knr92\" (UniqueName: \"kubernetes.io/projected/413fed7a-d662-4789-b5c4-32998b12ebdb-kube-api-access-knr92\") pod \"413fed7a-d662-4789-b5c4-32998b12ebdb\" (UID: \"413fed7a-d662-4789-b5c4-32998b12ebdb\") " Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.331026 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413fed7a-d662-4789-b5c4-32998b12ebdb-utilities\") pod \"413fed7a-d662-4789-b5c4-32998b12ebdb\" (UID: \"413fed7a-d662-4789-b5c4-32998b12ebdb\") " Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.331856 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/413fed7a-d662-4789-b5c4-32998b12ebdb-utilities" (OuterVolumeSpecName: "utilities") pod "413fed7a-d662-4789-b5c4-32998b12ebdb" (UID: "413fed7a-d662-4789-b5c4-32998b12ebdb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.340234 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/413fed7a-d662-4789-b5c4-32998b12ebdb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "413fed7a-d662-4789-b5c4-32998b12ebdb" (UID: "413fed7a-d662-4789-b5c4-32998b12ebdb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.351806 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/413fed7a-d662-4789-b5c4-32998b12ebdb-kube-api-access-knr92" (OuterVolumeSpecName: "kube-api-access-knr92") pod "413fed7a-d662-4789-b5c4-32998b12ebdb" (UID: "413fed7a-d662-4789-b5c4-32998b12ebdb"). InnerVolumeSpecName "kube-api-access-knr92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.433785 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413fed7a-d662-4789-b5c4-32998b12ebdb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.433815 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knr92\" (UniqueName: \"kubernetes.io/projected/413fed7a-d662-4789-b5c4-32998b12ebdb-kube-api-access-knr92\") on node \"crc\" DevicePath \"\"" Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.433827 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413fed7a-d662-4789-b5c4-32998b12ebdb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.809145 4806 generic.go:334] "Generic (PLEG): container finished" podID="413fed7a-d662-4789-b5c4-32998b12ebdb" containerID="12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254" exitCode=0 Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.809200 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56fcf" event={"ID":"413fed7a-d662-4789-b5c4-32998b12ebdb","Type":"ContainerDied","Data":"12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254"} Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.809496 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-56fcf" event={"ID":"413fed7a-d662-4789-b5c4-32998b12ebdb","Type":"ContainerDied","Data":"799a8a62601eda8ee01f82f8963c124f413debda2e2298b3082969be3b034c7a"} Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.809536 4806 scope.go:117] "RemoveContainer" containerID="12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254" Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.809233 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-56fcf" Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.832832 4806 scope.go:117] "RemoveContainer" containerID="d764f09b636d492d88516798065fb0a614463904c51620c6041af382973715bd" Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.852666 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-56fcf"] Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.862012 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-56fcf"] Jan 26 08:28:46 crc kubenswrapper[4806]: I0126 08:28:46.865669 4806 scope.go:117] "RemoveContainer" containerID="8e6a86b0fe33caf64a917ce54833af8ccd702affc910a83e718cdc4db2c70ece" Jan 26 08:28:47 crc kubenswrapper[4806]: I0126 08:28:47.050743 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="413fed7a-d662-4789-b5c4-32998b12ebdb" path="/var/lib/kubelet/pods/413fed7a-d662-4789-b5c4-32998b12ebdb/volumes" Jan 26 08:28:47 crc kubenswrapper[4806]: I0126 08:28:47.203727 4806 scope.go:117] "RemoveContainer" containerID="12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254" Jan 26 08:28:47 crc kubenswrapper[4806]: E0126 08:28:47.204321 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254\": container with ID starting with 12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254 not found: ID does not exist" containerID="12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254" Jan 26 08:28:47 crc kubenswrapper[4806]: I0126 08:28:47.204372 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254"} err="failed to get container status \"12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254\": rpc error: code = NotFound desc = could not find container \"12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254\": container with ID starting with 12c017799a8f416bee7cf24566d60ec19f1bb4efd2e3855380a2acc153fb4254 not found: ID does not exist" Jan 26 08:28:47 crc kubenswrapper[4806]: I0126 08:28:47.204401 4806 scope.go:117] "RemoveContainer" containerID="d764f09b636d492d88516798065fb0a614463904c51620c6041af382973715bd" Jan 26 08:28:47 crc kubenswrapper[4806]: E0126 08:28:47.204803 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d764f09b636d492d88516798065fb0a614463904c51620c6041af382973715bd\": container with ID starting with d764f09b636d492d88516798065fb0a614463904c51620c6041af382973715bd not found: ID does not exist" containerID="d764f09b636d492d88516798065fb0a614463904c51620c6041af382973715bd" Jan 26 08:28:47 crc kubenswrapper[4806]: I0126 08:28:47.204829 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d764f09b636d492d88516798065fb0a614463904c51620c6041af382973715bd"} err="failed to get container status \"d764f09b636d492d88516798065fb0a614463904c51620c6041af382973715bd\": rpc error: code = NotFound desc = could not find container \"d764f09b636d492d88516798065fb0a614463904c51620c6041af382973715bd\": container with ID starting with d764f09b636d492d88516798065fb0a614463904c51620c6041af382973715bd not found: ID does not exist" Jan 26 08:28:47 crc kubenswrapper[4806]: I0126 
08:28:47.204844 4806 scope.go:117] "RemoveContainer" containerID="8e6a86b0fe33caf64a917ce54833af8ccd702affc910a83e718cdc4db2c70ece" Jan 26 08:28:47 crc kubenswrapper[4806]: E0126 08:28:47.205282 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e6a86b0fe33caf64a917ce54833af8ccd702affc910a83e718cdc4db2c70ece\": container with ID starting with 8e6a86b0fe33caf64a917ce54833af8ccd702affc910a83e718cdc4db2c70ece not found: ID does not exist" containerID="8e6a86b0fe33caf64a917ce54833af8ccd702affc910a83e718cdc4db2c70ece" Jan 26 08:28:47 crc kubenswrapper[4806]: I0126 08:28:47.205318 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e6a86b0fe33caf64a917ce54833af8ccd702affc910a83e718cdc4db2c70ece"} err="failed to get container status \"8e6a86b0fe33caf64a917ce54833af8ccd702affc910a83e718cdc4db2c70ece\": rpc error: code = NotFound desc = could not find container \"8e6a86b0fe33caf64a917ce54833af8ccd702affc910a83e718cdc4db2c70ece\": container with ID starting with 8e6a86b0fe33caf64a917ce54833af8ccd702affc910a83e718cdc4db2c70ece not found: ID does not exist" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.333971 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nb8n9"] Jan 26 08:28:55 crc kubenswrapper[4806]: E0126 08:28:55.335204 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f440e9-a1c9-4d00-acef-68b67429269d" containerName="extract-utilities" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.335225 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f440e9-a1c9-4d00-acef-68b67429269d" containerName="extract-utilities" Jan 26 08:28:55 crc kubenswrapper[4806]: E0126 08:28:55.335274 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413fed7a-d662-4789-b5c4-32998b12ebdb" containerName="extract-content" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.335285 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="413fed7a-d662-4789-b5c4-32998b12ebdb" containerName="extract-content" Jan 26 08:28:55 crc kubenswrapper[4806]: E0126 08:28:55.335308 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f440e9-a1c9-4d00-acef-68b67429269d" containerName="registry-server" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.335322 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f440e9-a1c9-4d00-acef-68b67429269d" containerName="registry-server" Jan 26 08:28:55 crc kubenswrapper[4806]: E0126 08:28:55.335340 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f440e9-a1c9-4d00-acef-68b67429269d" containerName="extract-content" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.335350 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f440e9-a1c9-4d00-acef-68b67429269d" containerName="extract-content" Jan 26 08:28:55 crc kubenswrapper[4806]: E0126 08:28:55.335365 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413fed7a-d662-4789-b5c4-32998b12ebdb" containerName="extract-utilities" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.335375 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="413fed7a-d662-4789-b5c4-32998b12ebdb" containerName="extract-utilities" Jan 26 08:28:55 crc kubenswrapper[4806]: E0126 08:28:55.335391 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413fed7a-d662-4789-b5c4-32998b12ebdb" 
containerName="registry-server" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.335401 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="413fed7a-d662-4789-b5c4-32998b12ebdb" containerName="registry-server" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.335734 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f440e9-a1c9-4d00-acef-68b67429269d" containerName="registry-server" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.335756 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="413fed7a-d662-4789-b5c4-32998b12ebdb" containerName="registry-server" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.337658 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.342307 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb8n9"] Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.512213 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-catalog-content\") pod \"redhat-marketplace-nb8n9\" (UID: \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\") " pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.512251 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrt87\" (UniqueName: \"kubernetes.io/projected/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-kube-api-access-wrt87\") pod \"redhat-marketplace-nb8n9\" (UID: \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\") " pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.512286 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-utilities\") pod \"redhat-marketplace-nb8n9\" (UID: \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\") " pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.614335 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-catalog-content\") pod \"redhat-marketplace-nb8n9\" (UID: \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\") " pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.614610 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrt87\" (UniqueName: \"kubernetes.io/projected/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-kube-api-access-wrt87\") pod \"redhat-marketplace-nb8n9\" (UID: \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\") " pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.614641 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-utilities\") pod \"redhat-marketplace-nb8n9\" (UID: \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\") " pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.614900 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-catalog-content\") pod \"redhat-marketplace-nb8n9\" (UID: \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\") " pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.614993 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-utilities\") pod \"redhat-marketplace-nb8n9\" (UID: \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\") " pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.649143 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrt87\" (UniqueName: \"kubernetes.io/projected/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-kube-api-access-wrt87\") pod \"redhat-marketplace-nb8n9\" (UID: \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\") " pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:28:55 crc kubenswrapper[4806]: I0126 08:28:55.673732 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:28:56 crc kubenswrapper[4806]: I0126 08:28:56.165681 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb8n9"] Jan 26 08:28:56 crc kubenswrapper[4806]: I0126 08:28:56.897308 4806 generic.go:334] "Generic (PLEG): container finished" podID="5b9a302e-05ed-41eb-83b9-f8a734b49ea1" containerID="10b6ead55ac9f82e5ee889306d6ccfb665c4ef5fef323627fa0e82afc8c4430f" exitCode=0 Jan 26 08:28:56 crc kubenswrapper[4806]: I0126 08:28:56.897386 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb8n9" event={"ID":"5b9a302e-05ed-41eb-83b9-f8a734b49ea1","Type":"ContainerDied","Data":"10b6ead55ac9f82e5ee889306d6ccfb665c4ef5fef323627fa0e82afc8c4430f"} Jan 26 08:28:56 crc kubenswrapper[4806]: I0126 08:28:56.897702 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb8n9" event={"ID":"5b9a302e-05ed-41eb-83b9-f8a734b49ea1","Type":"ContainerStarted","Data":"665907d2e16faaa249dad5bef86606834e9ad30a05663583968539a1dd9db6e6"} Jan 26 08:28:57 crc kubenswrapper[4806]: I0126 08:28:57.909558 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb8n9" event={"ID":"5b9a302e-05ed-41eb-83b9-f8a734b49ea1","Type":"ContainerStarted","Data":"7d13b05e302c3e38aaf7e7cbfe3cb3252b008c5dc4ff69f4e2b586c6373fbbb6"} Jan 26 08:28:58 crc kubenswrapper[4806]: I0126 08:28:58.925336 4806 generic.go:334] "Generic (PLEG): container finished" podID="5b9a302e-05ed-41eb-83b9-f8a734b49ea1" containerID="7d13b05e302c3e38aaf7e7cbfe3cb3252b008c5dc4ff69f4e2b586c6373fbbb6" exitCode=0 Jan 26 08:28:58 crc kubenswrapper[4806]: I0126 08:28:58.925376 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb8n9" event={"ID":"5b9a302e-05ed-41eb-83b9-f8a734b49ea1","Type":"ContainerDied","Data":"7d13b05e302c3e38aaf7e7cbfe3cb3252b008c5dc4ff69f4e2b586c6373fbbb6"} Jan 26 08:28:59 crc kubenswrapper[4806]: I0126 08:28:59.936631 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb8n9" event={"ID":"5b9a302e-05ed-41eb-83b9-f8a734b49ea1","Type":"ContainerStarted","Data":"5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05"} Jan 26 08:28:59 crc 
kubenswrapper[4806]: I0126 08:28:59.966863 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nb8n9" podStartSLOduration=2.328063243 podStartE2EDuration="4.966843467s" podCreationTimestamp="2026-01-26 08:28:55 +0000 UTC" firstStartedPulling="2026-01-26 08:28:56.899764628 +0000 UTC m=+2116.164172684" lastFinishedPulling="2026-01-26 08:28:59.538544862 +0000 UTC m=+2118.802952908" observedRunningTime="2026-01-26 08:28:59.957951863 +0000 UTC m=+2119.222359949" watchObservedRunningTime="2026-01-26 08:28:59.966843467 +0000 UTC m=+2119.231251523" Jan 26 08:29:05 crc kubenswrapper[4806]: I0126 08:29:05.674551 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:29:05 crc kubenswrapper[4806]: I0126 08:29:05.675292 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:29:05 crc kubenswrapper[4806]: I0126 08:29:05.729178 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:29:06 crc kubenswrapper[4806]: I0126 08:29:06.037152 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:29:06 crc kubenswrapper[4806]: I0126 08:29:06.086728 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb8n9"] Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.009726 4806 generic.go:334] "Generic (PLEG): container finished" podID="0c4ea336-2189-42c5-9c34-1ad75642efd0" containerID="4abc80601f5ad74f92c41493a75e136c23ff5242347b9368c403e6ec72246213" exitCode=0 Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.009810 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" event={"ID":"0c4ea336-2189-42c5-9c34-1ad75642efd0","Type":"ContainerDied","Data":"4abc80601f5ad74f92c41493a75e136c23ff5242347b9368c403e6ec72246213"} Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.010223 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nb8n9" podUID="5b9a302e-05ed-41eb-83b9-f8a734b49ea1" containerName="registry-server" containerID="cri-o://5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05" gracePeriod=2 Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.466792 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.565080 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrt87\" (UniqueName: \"kubernetes.io/projected/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-kube-api-access-wrt87\") pod \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\" (UID: \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\") " Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.565134 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-catalog-content\") pod \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\" (UID: \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\") " Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.565472 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-utilities\") pod \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\" (UID: \"5b9a302e-05ed-41eb-83b9-f8a734b49ea1\") " Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.566497 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-utilities" (OuterVolumeSpecName: "utilities") pod "5b9a302e-05ed-41eb-83b9-f8a734b49ea1" (UID: "5b9a302e-05ed-41eb-83b9-f8a734b49ea1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.574806 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-kube-api-access-wrt87" (OuterVolumeSpecName: "kube-api-access-wrt87") pod "5b9a302e-05ed-41eb-83b9-f8a734b49ea1" (UID: "5b9a302e-05ed-41eb-83b9-f8a734b49ea1"). InnerVolumeSpecName "kube-api-access-wrt87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.591732 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5b9a302e-05ed-41eb-83b9-f8a734b49ea1" (UID: "5b9a302e-05ed-41eb-83b9-f8a734b49ea1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.667416 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.667452 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrt87\" (UniqueName: \"kubernetes.io/projected/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-kube-api-access-wrt87\") on node \"crc\" DevicePath \"\"" Jan 26 08:29:08 crc kubenswrapper[4806]: I0126 08:29:08.667463 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5b9a302e-05ed-41eb-83b9-f8a734b49ea1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.018594 4806 generic.go:334] "Generic (PLEG): container finished" podID="5b9a302e-05ed-41eb-83b9-f8a734b49ea1" containerID="5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05" exitCode=0 Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.018773 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb8n9" event={"ID":"5b9a302e-05ed-41eb-83b9-f8a734b49ea1","Type":"ContainerDied","Data":"5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05"} Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.019832 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nb8n9" event={"ID":"5b9a302e-05ed-41eb-83b9-f8a734b49ea1","Type":"ContainerDied","Data":"665907d2e16faaa249dad5bef86606834e9ad30a05663583968539a1dd9db6e6"} Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.018858 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nb8n9" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.019857 4806 scope.go:117] "RemoveContainer" containerID="5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.067242 4806 scope.go:117] "RemoveContainer" containerID="7d13b05e302c3e38aaf7e7cbfe3cb3252b008c5dc4ff69f4e2b586c6373fbbb6" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.094390 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb8n9"] Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.097066 4806 scope.go:117] "RemoveContainer" containerID="10b6ead55ac9f82e5ee889306d6ccfb665c4ef5fef323627fa0e82afc8c4430f" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.108003 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nb8n9"] Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.145363 4806 scope.go:117] "RemoveContainer" containerID="5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05" Jan 26 08:29:09 crc kubenswrapper[4806]: E0126 08:29:09.145776 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05\": container with ID starting with 5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05 not found: ID does not exist" containerID="5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.145809 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05"} err="failed to get container status \"5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05\": rpc error: code = NotFound desc = could not find container \"5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05\": container with ID starting with 5db4db124c20863c1a50a6415e84ba450551c276846ace689231dfe5e1859e05 not found: ID does not exist" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.145829 4806 scope.go:117] "RemoveContainer" containerID="7d13b05e302c3e38aaf7e7cbfe3cb3252b008c5dc4ff69f4e2b586c6373fbbb6" Jan 26 08:29:09 crc kubenswrapper[4806]: E0126 08:29:09.146191 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d13b05e302c3e38aaf7e7cbfe3cb3252b008c5dc4ff69f4e2b586c6373fbbb6\": container with ID starting with 7d13b05e302c3e38aaf7e7cbfe3cb3252b008c5dc4ff69f4e2b586c6373fbbb6 not found: ID does not exist" containerID="7d13b05e302c3e38aaf7e7cbfe3cb3252b008c5dc4ff69f4e2b586c6373fbbb6" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.146214 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d13b05e302c3e38aaf7e7cbfe3cb3252b008c5dc4ff69f4e2b586c6373fbbb6"} err="failed to get container status \"7d13b05e302c3e38aaf7e7cbfe3cb3252b008c5dc4ff69f4e2b586c6373fbbb6\": rpc error: code = NotFound desc = could not find container \"7d13b05e302c3e38aaf7e7cbfe3cb3252b008c5dc4ff69f4e2b586c6373fbbb6\": container with ID starting with 7d13b05e302c3e38aaf7e7cbfe3cb3252b008c5dc4ff69f4e2b586c6373fbbb6 not found: ID does not exist" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.146236 4806 scope.go:117] "RemoveContainer" 
containerID="10b6ead55ac9f82e5ee889306d6ccfb665c4ef5fef323627fa0e82afc8c4430f" Jan 26 08:29:09 crc kubenswrapper[4806]: E0126 08:29:09.146632 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10b6ead55ac9f82e5ee889306d6ccfb665c4ef5fef323627fa0e82afc8c4430f\": container with ID starting with 10b6ead55ac9f82e5ee889306d6ccfb665c4ef5fef323627fa0e82afc8c4430f not found: ID does not exist" containerID="10b6ead55ac9f82e5ee889306d6ccfb665c4ef5fef323627fa0e82afc8c4430f" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.146656 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10b6ead55ac9f82e5ee889306d6ccfb665c4ef5fef323627fa0e82afc8c4430f"} err="failed to get container status \"10b6ead55ac9f82e5ee889306d6ccfb665c4ef5fef323627fa0e82afc8c4430f\": rpc error: code = NotFound desc = could not find container \"10b6ead55ac9f82e5ee889306d6ccfb665c4ef5fef323627fa0e82afc8c4430f\": container with ID starting with 10b6ead55ac9f82e5ee889306d6ccfb665c4ef5fef323627fa0e82afc8c4430f not found: ID does not exist" Jan 26 08:29:09 crc kubenswrapper[4806]: E0126 08:29:09.191239 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b9a302e_05ed_41eb_83b9_f8a734b49ea1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b9a302e_05ed_41eb_83b9_f8a734b49ea1.slice/crio-665907d2e16faaa249dad5bef86606834e9ad30a05663583968539a1dd9db6e6\": RecentStats: unable to find data in memory cache]" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.450105 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.584059 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-neutron-ovn-metadata-agent-neutron-config-0\") pod \"0c4ea336-2189-42c5-9c34-1ad75642efd0\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.584429 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-ssh-key-openstack-edpm-ipam\") pod \"0c4ea336-2189-42c5-9c34-1ad75642efd0\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.584556 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-inventory\") pod \"0c4ea336-2189-42c5-9c34-1ad75642efd0\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.584633 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-nova-metadata-neutron-config-0\") pod \"0c4ea336-2189-42c5-9c34-1ad75642efd0\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.584796 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z92cx\" (UniqueName: \"kubernetes.io/projected/0c4ea336-2189-42c5-9c34-1ad75642efd0-kube-api-access-z92cx\") pod \"0c4ea336-2189-42c5-9c34-1ad75642efd0\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.584884 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-neutron-metadata-combined-ca-bundle\") pod \"0c4ea336-2189-42c5-9c34-1ad75642efd0\" (UID: \"0c4ea336-2189-42c5-9c34-1ad75642efd0\") " Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.601713 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4ea336-2189-42c5-9c34-1ad75642efd0-kube-api-access-z92cx" (OuterVolumeSpecName: "kube-api-access-z92cx") pod "0c4ea336-2189-42c5-9c34-1ad75642efd0" (UID: "0c4ea336-2189-42c5-9c34-1ad75642efd0"). InnerVolumeSpecName "kube-api-access-z92cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.601803 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "0c4ea336-2189-42c5-9c34-1ad75642efd0" (UID: "0c4ea336-2189-42c5-9c34-1ad75642efd0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.618407 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-inventory" (OuterVolumeSpecName: "inventory") pod "0c4ea336-2189-42c5-9c34-1ad75642efd0" (UID: "0c4ea336-2189-42c5-9c34-1ad75642efd0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.619174 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "0c4ea336-2189-42c5-9c34-1ad75642efd0" (UID: "0c4ea336-2189-42c5-9c34-1ad75642efd0"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.641334 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0c4ea336-2189-42c5-9c34-1ad75642efd0" (UID: "0c4ea336-2189-42c5-9c34-1ad75642efd0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.642117 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "0c4ea336-2189-42c5-9c34-1ad75642efd0" (UID: "0c4ea336-2189-42c5-9c34-1ad75642efd0"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.687475 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z92cx\" (UniqueName: \"kubernetes.io/projected/0c4ea336-2189-42c5-9c34-1ad75642efd0-kube-api-access-z92cx\") on node \"crc\" DevicePath \"\"" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.687502 4806 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.687515 4806 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.687537 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.687547 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:29:09 crc kubenswrapper[4806]: I0126 08:29:09.687555 4806 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/0c4ea336-2189-42c5-9c34-1ad75642efd0-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.027539 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.027508 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r" event={"ID":"0c4ea336-2189-42c5-9c34-1ad75642efd0","Type":"ContainerDied","Data":"dd426b8f54bbab600b98513d55c707a8e28f4381a8c4c22b4642dbcb9e841366"} Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.027688 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd426b8f54bbab600b98513d55c707a8e28f4381a8c4c22b4642dbcb9e841366" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.168894 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh"] Jan 26 08:29:10 crc kubenswrapper[4806]: E0126 08:29:10.169344 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9a302e-05ed-41eb-83b9-f8a734b49ea1" containerName="extract-content" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.169365 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9a302e-05ed-41eb-83b9-f8a734b49ea1" containerName="extract-content" Jan 26 08:29:10 crc kubenswrapper[4806]: E0126 08:29:10.169390 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9a302e-05ed-41eb-83b9-f8a734b49ea1" containerName="registry-server" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.169398 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9a302e-05ed-41eb-83b9-f8a734b49ea1" containerName="registry-server" Jan 26 08:29:10 crc kubenswrapper[4806]: E0126 08:29:10.169422 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b9a302e-05ed-41eb-83b9-f8a734b49ea1" containerName="extract-utilities" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.169428 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b9a302e-05ed-41eb-83b9-f8a734b49ea1" containerName="extract-utilities" Jan 26 08:29:10 crc kubenswrapper[4806]: E0126 08:29:10.169447 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c4ea336-2189-42c5-9c34-1ad75642efd0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.169456 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c4ea336-2189-42c5-9c34-1ad75642efd0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.169652 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b9a302e-05ed-41eb-83b9-f8a734b49ea1" containerName="registry-server" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.169663 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c4ea336-2189-42c5-9c34-1ad75642efd0" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.170327 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.172351 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.174115 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.174121 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.174313 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.174337 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.180358 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh"] Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.218750 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trpng\" (UniqueName: \"kubernetes.io/projected/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-kube-api-access-trpng\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.218807 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.218830 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.218849 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.218874 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.320328 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-trpng\" (UniqueName: \"kubernetes.io/projected/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-kube-api-access-trpng\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.320505 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.320554 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.320586 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.320619 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.325411 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.326297 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.327077 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.330095 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.338612 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trpng\" (UniqueName: \"kubernetes.io/projected/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-kube-api-access-trpng\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:10 crc kubenswrapper[4806]: I0126 08:29:10.540957 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:29:11 crc kubenswrapper[4806]: I0126 08:29:11.061468 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b9a302e-05ed-41eb-83b9-f8a734b49ea1" path="/var/lib/kubelet/pods/5b9a302e-05ed-41eb-83b9-f8a734b49ea1/volumes" Jan 26 08:29:11 crc kubenswrapper[4806]: I0126 08:29:11.082381 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh"] Jan 26 08:29:12 crc kubenswrapper[4806]: I0126 08:29:12.051367 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" event={"ID":"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7","Type":"ContainerStarted","Data":"5c21ca66d85d343253cd9148a9a5702bc935b710542b0dd01bdf9ae0c50598ef"} Jan 26 08:29:12 crc kubenswrapper[4806]: I0126 08:29:12.051768 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" event={"ID":"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7","Type":"ContainerStarted","Data":"0b2556dd8b1e227c0e036491eee52fcf661be5a388b609b19fad1867fc6dce93"} Jan 26 08:29:12 crc kubenswrapper[4806]: I0126 08:29:12.080433 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" podStartSLOduration=1.603915618 podStartE2EDuration="2.080411757s" podCreationTimestamp="2026-01-26 08:29:10 +0000 UTC" firstStartedPulling="2026-01-26 08:29:11.093467181 +0000 UTC m=+2130.357875247" lastFinishedPulling="2026-01-26 08:29:11.56996332 +0000 UTC m=+2130.834371386" observedRunningTime="2026-01-26 08:29:12.067660713 +0000 UTC m=+2131.332068769" watchObservedRunningTime="2026-01-26 08:29:12.080411757 +0000 UTC m=+2131.344819813" Jan 26 08:29:15 crc kubenswrapper[4806]: I0126 08:29:15.807098 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:29:15 crc kubenswrapper[4806]: I0126 08:29:15.807454 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.446362 4806 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-wv82t"] Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.449438 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.473788 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wv82t"] Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.482666 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29777069-68bf-4563-b523-7c97790725da-utilities\") pod \"community-operators-wv82t\" (UID: \"29777069-68bf-4563-b523-7c97790725da\") " pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.487653 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7bxs\" (UniqueName: \"kubernetes.io/projected/29777069-68bf-4563-b523-7c97790725da-kube-api-access-n7bxs\") pod \"community-operators-wv82t\" (UID: \"29777069-68bf-4563-b523-7c97790725da\") " pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.487909 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29777069-68bf-4563-b523-7c97790725da-catalog-content\") pod \"community-operators-wv82t\" (UID: \"29777069-68bf-4563-b523-7c97790725da\") " pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.590221 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7bxs\" (UniqueName: \"kubernetes.io/projected/29777069-68bf-4563-b523-7c97790725da-kube-api-access-n7bxs\") pod \"community-operators-wv82t\" (UID: \"29777069-68bf-4563-b523-7c97790725da\") " pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.590323 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29777069-68bf-4563-b523-7c97790725da-catalog-content\") pod \"community-operators-wv82t\" (UID: \"29777069-68bf-4563-b523-7c97790725da\") " pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.590391 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29777069-68bf-4563-b523-7c97790725da-utilities\") pod \"community-operators-wv82t\" (UID: \"29777069-68bf-4563-b523-7c97790725da\") " pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.591069 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29777069-68bf-4563-b523-7c97790725da-utilities\") pod \"community-operators-wv82t\" (UID: \"29777069-68bf-4563-b523-7c97790725da\") " pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.591158 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29777069-68bf-4563-b523-7c97790725da-catalog-content\") pod \"community-operators-wv82t\" (UID: 
\"29777069-68bf-4563-b523-7c97790725da\") " pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.609349 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7bxs\" (UniqueName: \"kubernetes.io/projected/29777069-68bf-4563-b523-7c97790725da-kube-api-access-n7bxs\") pod \"community-operators-wv82t\" (UID: \"29777069-68bf-4563-b523-7c97790725da\") " pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:27 crc kubenswrapper[4806]: I0126 08:29:27.782370 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:28 crc kubenswrapper[4806]: I0126 08:29:28.163287 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wv82t"] Jan 26 08:29:28 crc kubenswrapper[4806]: I0126 08:29:28.213615 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv82t" event={"ID":"29777069-68bf-4563-b523-7c97790725da","Type":"ContainerStarted","Data":"e1c176292a67290bc81ab7eb0e0ee74d00398f67515665dcd9cc48df138a320f"} Jan 26 08:29:29 crc kubenswrapper[4806]: I0126 08:29:29.222427 4806 generic.go:334] "Generic (PLEG): container finished" podID="29777069-68bf-4563-b523-7c97790725da" containerID="48544fc5e8d33db37667ba081b57d5ac4cc3d165b1fef3f3049a6586e3a31505" exitCode=0 Jan 26 08:29:29 crc kubenswrapper[4806]: I0126 08:29:29.222593 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv82t" event={"ID":"29777069-68bf-4563-b523-7c97790725da","Type":"ContainerDied","Data":"48544fc5e8d33db37667ba081b57d5ac4cc3d165b1fef3f3049a6586e3a31505"} Jan 26 08:29:29 crc kubenswrapper[4806]: I0126 08:29:29.224424 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 08:29:30 crc kubenswrapper[4806]: I0126 08:29:30.232197 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv82t" event={"ID":"29777069-68bf-4563-b523-7c97790725da","Type":"ContainerStarted","Data":"9c8624ec39527de429ba1e485b486e1e033ec05b9a24310f87ab5bea69b67578"} Jan 26 08:29:31 crc kubenswrapper[4806]: I0126 08:29:31.241341 4806 generic.go:334] "Generic (PLEG): container finished" podID="29777069-68bf-4563-b523-7c97790725da" containerID="9c8624ec39527de429ba1e485b486e1e033ec05b9a24310f87ab5bea69b67578" exitCode=0 Jan 26 08:29:31 crc kubenswrapper[4806]: I0126 08:29:31.241380 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv82t" event={"ID":"29777069-68bf-4563-b523-7c97790725da","Type":"ContainerDied","Data":"9c8624ec39527de429ba1e485b486e1e033ec05b9a24310f87ab5bea69b67578"} Jan 26 08:29:32 crc kubenswrapper[4806]: I0126 08:29:32.253554 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv82t" event={"ID":"29777069-68bf-4563-b523-7c97790725da","Type":"ContainerStarted","Data":"78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a"} Jan 26 08:29:32 crc kubenswrapper[4806]: I0126 08:29:32.276289 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wv82t" podStartSLOduration=2.858339742 podStartE2EDuration="5.276271547s" podCreationTimestamp="2026-01-26 08:29:27 +0000 UTC" firstStartedPulling="2026-01-26 08:29:29.224231238 +0000 UTC m=+2148.488639294" 
lastFinishedPulling="2026-01-26 08:29:31.642163033 +0000 UTC m=+2150.906571099" observedRunningTime="2026-01-26 08:29:32.273313692 +0000 UTC m=+2151.537721758" watchObservedRunningTime="2026-01-26 08:29:32.276271547 +0000 UTC m=+2151.540679613" Jan 26 08:29:37 crc kubenswrapper[4806]: I0126 08:29:37.782649 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:37 crc kubenswrapper[4806]: I0126 08:29:37.783254 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:37 crc kubenswrapper[4806]: I0126 08:29:37.825580 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:38 crc kubenswrapper[4806]: I0126 08:29:38.375666 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:38 crc kubenswrapper[4806]: I0126 08:29:38.440717 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wv82t"] Jan 26 08:29:40 crc kubenswrapper[4806]: I0126 08:29:40.316253 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wv82t" podUID="29777069-68bf-4563-b523-7c97790725da" containerName="registry-server" containerID="cri-o://78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a" gracePeriod=2 Jan 26 08:29:40 crc kubenswrapper[4806]: I0126 08:29:40.814313 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:40 crc kubenswrapper[4806]: I0126 08:29:40.880089 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7bxs\" (UniqueName: \"kubernetes.io/projected/29777069-68bf-4563-b523-7c97790725da-kube-api-access-n7bxs\") pod \"29777069-68bf-4563-b523-7c97790725da\" (UID: \"29777069-68bf-4563-b523-7c97790725da\") " Jan 26 08:29:40 crc kubenswrapper[4806]: I0126 08:29:40.880149 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29777069-68bf-4563-b523-7c97790725da-utilities\") pod \"29777069-68bf-4563-b523-7c97790725da\" (UID: \"29777069-68bf-4563-b523-7c97790725da\") " Jan 26 08:29:40 crc kubenswrapper[4806]: I0126 08:29:40.880175 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29777069-68bf-4563-b523-7c97790725da-catalog-content\") pod \"29777069-68bf-4563-b523-7c97790725da\" (UID: \"29777069-68bf-4563-b523-7c97790725da\") " Jan 26 08:29:40 crc kubenswrapper[4806]: I0126 08:29:40.880920 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29777069-68bf-4563-b523-7c97790725da-utilities" (OuterVolumeSpecName: "utilities") pod "29777069-68bf-4563-b523-7c97790725da" (UID: "29777069-68bf-4563-b523-7c97790725da"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:29:40 crc kubenswrapper[4806]: I0126 08:29:40.885483 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29777069-68bf-4563-b523-7c97790725da-kube-api-access-n7bxs" (OuterVolumeSpecName: "kube-api-access-n7bxs") pod "29777069-68bf-4563-b523-7c97790725da" (UID: "29777069-68bf-4563-b523-7c97790725da"). InnerVolumeSpecName "kube-api-access-n7bxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:29:40 crc kubenswrapper[4806]: I0126 08:29:40.938451 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29777069-68bf-4563-b523-7c97790725da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29777069-68bf-4563-b523-7c97790725da" (UID: "29777069-68bf-4563-b523-7c97790725da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:29:40 crc kubenswrapper[4806]: I0126 08:29:40.982187 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7bxs\" (UniqueName: \"kubernetes.io/projected/29777069-68bf-4563-b523-7c97790725da-kube-api-access-n7bxs\") on node \"crc\" DevicePath \"\"" Jan 26 08:29:40 crc kubenswrapper[4806]: I0126 08:29:40.982222 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29777069-68bf-4563-b523-7c97790725da-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:29:40 crc kubenswrapper[4806]: I0126 08:29:40.982234 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29777069-68bf-4563-b523-7c97790725da-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.325301 4806 generic.go:334] "Generic (PLEG): container finished" podID="29777069-68bf-4563-b523-7c97790725da" containerID="78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a" exitCode=0 Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.325361 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wv82t" Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.325364 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv82t" event={"ID":"29777069-68bf-4563-b523-7c97790725da","Type":"ContainerDied","Data":"78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a"} Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.325549 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wv82t" event={"ID":"29777069-68bf-4563-b523-7c97790725da","Type":"ContainerDied","Data":"e1c176292a67290bc81ab7eb0e0ee74d00398f67515665dcd9cc48df138a320f"} Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.325586 4806 scope.go:117] "RemoveContainer" containerID="78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a" Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.346676 4806 scope.go:117] "RemoveContainer" containerID="9c8624ec39527de429ba1e485b486e1e033ec05b9a24310f87ab5bea69b67578" Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.359026 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wv82t"] Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.368422 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wv82t"] Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.368971 4806 scope.go:117] "RemoveContainer" containerID="48544fc5e8d33db37667ba081b57d5ac4cc3d165b1fef3f3049a6586e3a31505" Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.407638 4806 scope.go:117] "RemoveContainer" containerID="78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a" Jan 26 08:29:41 crc kubenswrapper[4806]: E0126 08:29:41.408180 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a\": container with ID starting with 78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a not found: ID does not exist" containerID="78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a" Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.408223 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a"} err="failed to get container status \"78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a\": rpc error: code = NotFound desc = could not find container \"78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a\": container with ID starting with 78aebcaeaa44cd3a710b791d603fa6792890ef2f6e176cd560d4f3c9412bcd9a not found: ID does not exist" Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.408248 4806 scope.go:117] "RemoveContainer" containerID="9c8624ec39527de429ba1e485b486e1e033ec05b9a24310f87ab5bea69b67578" Jan 26 08:29:41 crc kubenswrapper[4806]: E0126 08:29:41.409030 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c8624ec39527de429ba1e485b486e1e033ec05b9a24310f87ab5bea69b67578\": container with ID starting with 9c8624ec39527de429ba1e485b486e1e033ec05b9a24310f87ab5bea69b67578 not found: ID does not exist" containerID="9c8624ec39527de429ba1e485b486e1e033ec05b9a24310f87ab5bea69b67578" Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.409067 4806 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c8624ec39527de429ba1e485b486e1e033ec05b9a24310f87ab5bea69b67578"} err="failed to get container status \"9c8624ec39527de429ba1e485b486e1e033ec05b9a24310f87ab5bea69b67578\": rpc error: code = NotFound desc = could not find container \"9c8624ec39527de429ba1e485b486e1e033ec05b9a24310f87ab5bea69b67578\": container with ID starting with 9c8624ec39527de429ba1e485b486e1e033ec05b9a24310f87ab5bea69b67578 not found: ID does not exist" Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.409108 4806 scope.go:117] "RemoveContainer" containerID="48544fc5e8d33db37667ba081b57d5ac4cc3d165b1fef3f3049a6586e3a31505" Jan 26 08:29:41 crc kubenswrapper[4806]: E0126 08:29:41.409429 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48544fc5e8d33db37667ba081b57d5ac4cc3d165b1fef3f3049a6586e3a31505\": container with ID starting with 48544fc5e8d33db37667ba081b57d5ac4cc3d165b1fef3f3049a6586e3a31505 not found: ID does not exist" containerID="48544fc5e8d33db37667ba081b57d5ac4cc3d165b1fef3f3049a6586e3a31505" Jan 26 08:29:41 crc kubenswrapper[4806]: I0126 08:29:41.409463 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48544fc5e8d33db37667ba081b57d5ac4cc3d165b1fef3f3049a6586e3a31505"} err="failed to get container status \"48544fc5e8d33db37667ba081b57d5ac4cc3d165b1fef3f3049a6586e3a31505\": rpc error: code = NotFound desc = could not find container \"48544fc5e8d33db37667ba081b57d5ac4cc3d165b1fef3f3049a6586e3a31505\": container with ID starting with 48544fc5e8d33db37667ba081b57d5ac4cc3d165b1fef3f3049a6586e3a31505 not found: ID does not exist" Jan 26 08:29:43 crc kubenswrapper[4806]: I0126 08:29:43.052246 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29777069-68bf-4563-b523-7c97790725da" path="/var/lib/kubelet/pods/29777069-68bf-4563-b523-7c97790725da/volumes" Jan 26 08:29:45 crc kubenswrapper[4806]: I0126 08:29:45.806353 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:29:45 crc kubenswrapper[4806]: I0126 08:29:45.806838 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.173712 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp"] Jan 26 08:30:00 crc kubenswrapper[4806]: E0126 08:30:00.175410 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29777069-68bf-4563-b523-7c97790725da" containerName="extract-content" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.175433 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="29777069-68bf-4563-b523-7c97790725da" containerName="extract-content" Jan 26 08:30:00 crc kubenswrapper[4806]: E0126 08:30:00.175449 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29777069-68bf-4563-b523-7c97790725da" containerName="registry-server" 
Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.175457 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="29777069-68bf-4563-b523-7c97790725da" containerName="registry-server" Jan 26 08:30:00 crc kubenswrapper[4806]: E0126 08:30:00.175486 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29777069-68bf-4563-b523-7c97790725da" containerName="extract-utilities" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.175493 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="29777069-68bf-4563-b523-7c97790725da" containerName="extract-utilities" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.175771 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="29777069-68bf-4563-b523-7c97790725da" containerName="registry-server" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.176412 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.179997 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.180285 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.192543 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp"] Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.223137 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt74q\" (UniqueName: \"kubernetes.io/projected/c7c14310-f673-4a0f-a892-477b7a76b6ab-kube-api-access-jt74q\") pod \"collect-profiles-29490270-65lcp\" (UID: \"c7c14310-f673-4a0f-a892-477b7a76b6ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.223400 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c14310-f673-4a0f-a892-477b7a76b6ab-config-volume\") pod \"collect-profiles-29490270-65lcp\" (UID: \"c7c14310-f673-4a0f-a892-477b7a76b6ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.223450 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c14310-f673-4a0f-a892-477b7a76b6ab-secret-volume\") pod \"collect-profiles-29490270-65lcp\" (UID: \"c7c14310-f673-4a0f-a892-477b7a76b6ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.324888 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c14310-f673-4a0f-a892-477b7a76b6ab-config-volume\") pod \"collect-profiles-29490270-65lcp\" (UID: \"c7c14310-f673-4a0f-a892-477b7a76b6ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.325000 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c7c14310-f673-4a0f-a892-477b7a76b6ab-secret-volume\") pod \"collect-profiles-29490270-65lcp\" (UID: \"c7c14310-f673-4a0f-a892-477b7a76b6ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.325211 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt74q\" (UniqueName: \"kubernetes.io/projected/c7c14310-f673-4a0f-a892-477b7a76b6ab-kube-api-access-jt74q\") pod \"collect-profiles-29490270-65lcp\" (UID: \"c7c14310-f673-4a0f-a892-477b7a76b6ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.326128 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c14310-f673-4a0f-a892-477b7a76b6ab-config-volume\") pod \"collect-profiles-29490270-65lcp\" (UID: \"c7c14310-f673-4a0f-a892-477b7a76b6ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.334391 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c14310-f673-4a0f-a892-477b7a76b6ab-secret-volume\") pod \"collect-profiles-29490270-65lcp\" (UID: \"c7c14310-f673-4a0f-a892-477b7a76b6ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.345598 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt74q\" (UniqueName: \"kubernetes.io/projected/c7c14310-f673-4a0f-a892-477b7a76b6ab-kube-api-access-jt74q\") pod \"collect-profiles-29490270-65lcp\" (UID: \"c7c14310-f673-4a0f-a892-477b7a76b6ab\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.557090 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:00 crc kubenswrapper[4806]: I0126 08:30:00.986600 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp"] Jan 26 08:30:01 crc kubenswrapper[4806]: I0126 08:30:01.520102 4806 generic.go:334] "Generic (PLEG): container finished" podID="c7c14310-f673-4a0f-a892-477b7a76b6ab" containerID="4798f7dd37e755aa247ebc55fbfade0c9e5bee012f337adc7719d10f368ae5c0" exitCode=0 Jan 26 08:30:01 crc kubenswrapper[4806]: I0126 08:30:01.520385 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" event={"ID":"c7c14310-f673-4a0f-a892-477b7a76b6ab","Type":"ContainerDied","Data":"4798f7dd37e755aa247ebc55fbfade0c9e5bee012f337adc7719d10f368ae5c0"} Jan 26 08:30:01 crc kubenswrapper[4806]: I0126 08:30:01.520417 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" event={"ID":"c7c14310-f673-4a0f-a892-477b7a76b6ab","Type":"ContainerStarted","Data":"33abaca87dd65bf9e58b40cf0c7ae0d7fe7e653946560a36a3dfc49563a9df43"} Jan 26 08:30:02 crc kubenswrapper[4806]: I0126 08:30:02.841187 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:02 crc kubenswrapper[4806]: I0126 08:30:02.979100 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c14310-f673-4a0f-a892-477b7a76b6ab-config-volume\") pod \"c7c14310-f673-4a0f-a892-477b7a76b6ab\" (UID: \"c7c14310-f673-4a0f-a892-477b7a76b6ab\") " Jan 26 08:30:02 crc kubenswrapper[4806]: I0126 08:30:02.979181 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c14310-f673-4a0f-a892-477b7a76b6ab-secret-volume\") pod \"c7c14310-f673-4a0f-a892-477b7a76b6ab\" (UID: \"c7c14310-f673-4a0f-a892-477b7a76b6ab\") " Jan 26 08:30:02 crc kubenswrapper[4806]: I0126 08:30:02.979262 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt74q\" (UniqueName: \"kubernetes.io/projected/c7c14310-f673-4a0f-a892-477b7a76b6ab-kube-api-access-jt74q\") pod \"c7c14310-f673-4a0f-a892-477b7a76b6ab\" (UID: \"c7c14310-f673-4a0f-a892-477b7a76b6ab\") " Jan 26 08:30:02 crc kubenswrapper[4806]: I0126 08:30:02.979693 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7c14310-f673-4a0f-a892-477b7a76b6ab-config-volume" (OuterVolumeSpecName: "config-volume") pod "c7c14310-f673-4a0f-a892-477b7a76b6ab" (UID: "c7c14310-f673-4a0f-a892-477b7a76b6ab"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:30:02 crc kubenswrapper[4806]: I0126 08:30:02.980547 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c7c14310-f673-4a0f-a892-477b7a76b6ab-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 08:30:02 crc kubenswrapper[4806]: I0126 08:30:02.985735 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7c14310-f673-4a0f-a892-477b7a76b6ab-kube-api-access-jt74q" (OuterVolumeSpecName: "kube-api-access-jt74q") pod "c7c14310-f673-4a0f-a892-477b7a76b6ab" (UID: "c7c14310-f673-4a0f-a892-477b7a76b6ab"). InnerVolumeSpecName "kube-api-access-jt74q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:30:02 crc kubenswrapper[4806]: I0126 08:30:02.986191 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7c14310-f673-4a0f-a892-477b7a76b6ab-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c7c14310-f673-4a0f-a892-477b7a76b6ab" (UID: "c7c14310-f673-4a0f-a892-477b7a76b6ab"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:30:03 crc kubenswrapper[4806]: I0126 08:30:03.082981 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c7c14310-f673-4a0f-a892-477b7a76b6ab-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 08:30:03 crc kubenswrapper[4806]: I0126 08:30:03.083012 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt74q\" (UniqueName: \"kubernetes.io/projected/c7c14310-f673-4a0f-a892-477b7a76b6ab-kube-api-access-jt74q\") on node \"crc\" DevicePath \"\"" Jan 26 08:30:03 crc kubenswrapper[4806]: I0126 08:30:03.544155 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" event={"ID":"c7c14310-f673-4a0f-a892-477b7a76b6ab","Type":"ContainerDied","Data":"33abaca87dd65bf9e58b40cf0c7ae0d7fe7e653946560a36a3dfc49563a9df43"} Jan 26 08:30:03 crc kubenswrapper[4806]: I0126 08:30:03.544653 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33abaca87dd65bf9e58b40cf0c7ae0d7fe7e653946560a36a3dfc49563a9df43" Jan 26 08:30:03 crc kubenswrapper[4806]: I0126 08:30:03.544250 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp" Jan 26 08:30:03 crc kubenswrapper[4806]: I0126 08:30:03.915964 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z"] Jan 26 08:30:03 crc kubenswrapper[4806]: I0126 08:30:03.923961 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490225-v2v7z"] Jan 26 08:30:05 crc kubenswrapper[4806]: I0126 08:30:05.052078 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1" path="/var/lib/kubelet/pods/d7f14ee6-1d4a-4cb9-bf7d-7a3ad4b0a7c1/volumes" Jan 26 08:30:15 crc kubenswrapper[4806]: I0126 08:30:15.806222 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:30:15 crc kubenswrapper[4806]: I0126 08:30:15.806904 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:30:15 crc kubenswrapper[4806]: I0126 08:30:15.806952 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:30:15 crc kubenswrapper[4806]: I0126 08:30:15.807765 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4674bef7a8379021c9731cea3355fb93223c54af51ccd66095b35a4c211c8d3c"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:30:15 crc kubenswrapper[4806]: I0126 08:30:15.807822 4806 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://4674bef7a8379021c9731cea3355fb93223c54af51ccd66095b35a4c211c8d3c" gracePeriod=600 Jan 26 08:30:16 crc kubenswrapper[4806]: I0126 08:30:16.667261 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="4674bef7a8379021c9731cea3355fb93223c54af51ccd66095b35a4c211c8d3c" exitCode=0 Jan 26 08:30:16 crc kubenswrapper[4806]: I0126 08:30:16.667417 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"4674bef7a8379021c9731cea3355fb93223c54af51ccd66095b35a4c211c8d3c"} Jan 26 08:30:16 crc kubenswrapper[4806]: I0126 08:30:16.667597 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca"} Jan 26 08:30:16 crc kubenswrapper[4806]: I0126 08:30:16.667620 4806 scope.go:117] "RemoveContainer" containerID="55ec0c990dc91e0e377b96ad87f2e5ff0605f7a996ad405f57b4bd86d9139349" Jan 26 08:30:37 crc kubenswrapper[4806]: I0126 08:30:37.279987 4806 scope.go:117] "RemoveContainer" containerID="3b8ffc16eec0a9205c74046e6957dc892472e5de7293eb6ef937088aaf25fa9d" Jan 26 08:32:45 crc kubenswrapper[4806]: I0126 08:32:45.806235 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:32:45 crc kubenswrapper[4806]: I0126 08:32:45.807376 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:33:15 crc kubenswrapper[4806]: I0126 08:33:15.806073 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:33:15 crc kubenswrapper[4806]: I0126 08:33:15.806692 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:33:45 crc kubenswrapper[4806]: I0126 08:33:45.806364 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:33:45 crc kubenswrapper[4806]: I0126 08:33:45.807121 4806 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:33:45 crc kubenswrapper[4806]: I0126 08:33:45.807180 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:33:45 crc kubenswrapper[4806]: I0126 08:33:45.808061 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:33:45 crc kubenswrapper[4806]: I0126 08:33:45.808129 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" gracePeriod=600 Jan 26 08:33:45 crc kubenswrapper[4806]: E0126 08:33:45.935067 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:33:46 crc kubenswrapper[4806]: I0126 08:33:46.594425 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" exitCode=0 Jan 26 08:33:46 crc kubenswrapper[4806]: I0126 08:33:46.594467 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca"} Jan 26 08:33:46 crc kubenswrapper[4806]: I0126 08:33:46.594848 4806 scope.go:117] "RemoveContainer" containerID="4674bef7a8379021c9731cea3355fb93223c54af51ccd66095b35a4c211c8d3c" Jan 26 08:33:46 crc kubenswrapper[4806]: I0126 08:33:46.595362 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:33:46 crc kubenswrapper[4806]: E0126 08:33:46.595694 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:33:58 crc kubenswrapper[4806]: I0126 08:33:58.042275 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:33:58 crc kubenswrapper[4806]: E0126 08:33:58.042997 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:34:02 crc kubenswrapper[4806]: I0126 08:34:02.724826 4806 generic.go:334] "Generic (PLEG): container finished" podID="c23b3bb7-8ff5-4e80-8476-b478ffeb87a7" containerID="5c21ca66d85d343253cd9148a9a5702bc935b710542b0dd01bdf9ae0c50598ef" exitCode=0 Jan 26 08:34:02 crc kubenswrapper[4806]: I0126 08:34:02.724948 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" event={"ID":"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7","Type":"ContainerDied","Data":"5c21ca66d85d343253cd9148a9a5702bc935b710542b0dd01bdf9ae0c50598ef"} Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.181946 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.339892 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-libvirt-combined-ca-bundle\") pod \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.340764 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trpng\" (UniqueName: \"kubernetes.io/projected/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-kube-api-access-trpng\") pod \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.340969 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-inventory\") pod \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.341147 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-ssh-key-openstack-edpm-ipam\") pod \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.341178 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-libvirt-secret-0\") pod \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\" (UID: \"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7\") " Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.346006 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "c23b3bb7-8ff5-4e80-8476-b478ffeb87a7" (UID: "c23b3bb7-8ff5-4e80-8476-b478ffeb87a7"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.350577 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-kube-api-access-trpng" (OuterVolumeSpecName: "kube-api-access-trpng") pod "c23b3bb7-8ff5-4e80-8476-b478ffeb87a7" (UID: "c23b3bb7-8ff5-4e80-8476-b478ffeb87a7"). InnerVolumeSpecName "kube-api-access-trpng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.369453 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "c23b3bb7-8ff5-4e80-8476-b478ffeb87a7" (UID: "c23b3bb7-8ff5-4e80-8476-b478ffeb87a7"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.370480 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-inventory" (OuterVolumeSpecName: "inventory") pod "c23b3bb7-8ff5-4e80-8476-b478ffeb87a7" (UID: "c23b3bb7-8ff5-4e80-8476-b478ffeb87a7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.381614 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c23b3bb7-8ff5-4e80-8476-b478ffeb87a7" (UID: "c23b3bb7-8ff5-4e80-8476-b478ffeb87a7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.443723 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trpng\" (UniqueName: \"kubernetes.io/projected/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-kube-api-access-trpng\") on node \"crc\" DevicePath \"\"" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.443767 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.443789 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.443810 4806 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.443828 4806 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c23b3bb7-8ff5-4e80-8476-b478ffeb87a7-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.752759 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" event={"ID":"c23b3bb7-8ff5-4e80-8476-b478ffeb87a7","Type":"ContainerDied","Data":"0b2556dd8b1e227c0e036491eee52fcf661be5a388b609b19fad1867fc6dce93"} Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.752801 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b2556dd8b1e227c0e036491eee52fcf661be5a388b609b19fad1867fc6dce93" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.752823 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.870168 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm"] Jan 26 08:34:04 crc kubenswrapper[4806]: E0126 08:34:04.870573 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c14310-f673-4a0f-a892-477b7a76b6ab" containerName="collect-profiles" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.870586 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c14310-f673-4a0f-a892-477b7a76b6ab" containerName="collect-profiles" Jan 26 08:34:04 crc kubenswrapper[4806]: E0126 08:34:04.870613 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23b3bb7-8ff5-4e80-8476-b478ffeb87a7" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.870620 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23b3bb7-8ff5-4e80-8476-b478ffeb87a7" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.870821 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7c14310-f673-4a0f-a892-477b7a76b6ab" containerName="collect-profiles" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.870840 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c23b3bb7-8ff5-4e80-8476-b478ffeb87a7" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.871453 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.875551 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.875578 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.875620 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.875642 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.876507 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.876689 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.876871 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:34:04 crc kubenswrapper[4806]: I0126 08:34:04.897373 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm"] Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.055027 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.055396 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.055427 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g9s9\" (UniqueName: \"kubernetes.io/projected/0c6eebc2-cb5b-4524-931a-96b86b65585a-kube-api-access-8g9s9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.055495 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.055537 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.055636 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.055701 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.055880 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.055989 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.160566 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.160781 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.160849 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g9s9\" (UniqueName: \"kubernetes.io/projected/0c6eebc2-cb5b-4524-931a-96b86b65585a-kube-api-access-8g9s9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.161017 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.161084 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.161159 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.161218 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.161340 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.161414 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.164898 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.166172 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.166905 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.167038 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.168437 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.168877 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.173974 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-migration-ssh-key-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.177320 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.185211 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g9s9\" (UniqueName: \"kubernetes.io/projected/0c6eebc2-cb5b-4524-931a-96b86b65585a-kube-api-access-8g9s9\") pod \"nova-edpm-deployment-openstack-edpm-ipam-tdxcm\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.203744 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:34:05 crc kubenswrapper[4806]: I0126 08:34:05.826143 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm"] Jan 26 08:34:06 crc kubenswrapper[4806]: I0126 08:34:06.771515 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" event={"ID":"0c6eebc2-cb5b-4524-931a-96b86b65585a","Type":"ContainerStarted","Data":"acb024c16aeeeab1c0e43d9e04049f116014ff8772b8474d4455d5c76be84fdd"} Jan 26 08:34:06 crc kubenswrapper[4806]: I0126 08:34:06.773463 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" event={"ID":"0c6eebc2-cb5b-4524-931a-96b86b65585a","Type":"ContainerStarted","Data":"e54794255518985bf93268fff468bbfa1ac5eb1de08c20d31625e26300d01ee6"} Jan 26 08:34:06 crc kubenswrapper[4806]: I0126 08:34:06.803380 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" podStartSLOduration=2.262905164 podStartE2EDuration="2.803358064s" podCreationTimestamp="2026-01-26 08:34:04 +0000 UTC" firstStartedPulling="2026-01-26 08:34:05.839274615 +0000 UTC m=+2425.103682671" lastFinishedPulling="2026-01-26 08:34:06.379727515 +0000 UTC m=+2425.644135571" observedRunningTime="2026-01-26 08:34:06.788499265 +0000 UTC m=+2426.052907321" watchObservedRunningTime="2026-01-26 08:34:06.803358064 +0000 UTC m=+2426.067766120" Jan 26 08:34:13 crc kubenswrapper[4806]: I0126 08:34:13.042457 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:34:13 crc kubenswrapper[4806]: E0126 08:34:13.043280 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:34:24 crc kubenswrapper[4806]: I0126 08:34:24.043857 4806 scope.go:117] "RemoveContainer" 
containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:34:24 crc kubenswrapper[4806]: E0126 08:34:24.044854 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:34:38 crc kubenswrapper[4806]: I0126 08:34:38.041701 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:34:38 crc kubenswrapper[4806]: E0126 08:34:38.042492 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:34:51 crc kubenswrapper[4806]: I0126 08:34:51.041621 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:34:51 crc kubenswrapper[4806]: E0126 08:34:51.042682 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:35:04 crc kubenswrapper[4806]: I0126 08:35:04.042369 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:35:04 crc kubenswrapper[4806]: E0126 08:35:04.043250 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:35:19 crc kubenswrapper[4806]: I0126 08:35:19.042142 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:35:19 crc kubenswrapper[4806]: E0126 08:35:19.043162 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:35:33 crc kubenswrapper[4806]: I0126 08:35:33.041834 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:35:33 crc kubenswrapper[4806]: E0126 08:35:33.042508 4806 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:35:45 crc kubenswrapper[4806]: I0126 08:35:45.041815 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:35:45 crc kubenswrapper[4806]: E0126 08:35:45.043034 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:36:00 crc kubenswrapper[4806]: I0126 08:36:00.042989 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:36:00 crc kubenswrapper[4806]: E0126 08:36:00.044127 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:36:11 crc kubenswrapper[4806]: I0126 08:36:11.048340 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:36:11 crc kubenswrapper[4806]: E0126 08:36:11.050139 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:36:26 crc kubenswrapper[4806]: I0126 08:36:26.042717 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:36:26 crc kubenswrapper[4806]: E0126 08:36:26.043437 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:36:41 crc kubenswrapper[4806]: I0126 08:36:41.067031 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:36:41 crc kubenswrapper[4806]: E0126 08:36:41.070267 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:36:51 crc kubenswrapper[4806]: I0126 08:36:51.263269 4806 generic.go:334] "Generic (PLEG): container finished" podID="0c6eebc2-cb5b-4524-931a-96b86b65585a" containerID="acb024c16aeeeab1c0e43d9e04049f116014ff8772b8474d4455d5c76be84fdd" exitCode=0 Jan 26 08:36:51 crc kubenswrapper[4806]: I0126 08:36:51.263360 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" event={"ID":"0c6eebc2-cb5b-4524-931a-96b86b65585a","Type":"ContainerDied","Data":"acb024c16aeeeab1c0e43d9e04049f116014ff8772b8474d4455d5c76be84fdd"} Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.671641 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.804808 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-cell1-compute-config-1\") pod \"0c6eebc2-cb5b-4524-931a-96b86b65585a\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.804865 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-combined-ca-bundle\") pod \"0c6eebc2-cb5b-4524-931a-96b86b65585a\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.804923 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-inventory\") pod \"0c6eebc2-cb5b-4524-931a-96b86b65585a\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.804999 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-cell1-compute-config-0\") pod \"0c6eebc2-cb5b-4524-931a-96b86b65585a\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.805024 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-ssh-key-openstack-edpm-ipam\") pod \"0c6eebc2-cb5b-4524-931a-96b86b65585a\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.805078 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-extra-config-0\") pod \"0c6eebc2-cb5b-4524-931a-96b86b65585a\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.805104 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-migration-ssh-key-1\") pod \"0c6eebc2-cb5b-4524-931a-96b86b65585a\" (UID: 
\"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.805157 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g9s9\" (UniqueName: \"kubernetes.io/projected/0c6eebc2-cb5b-4524-931a-96b86b65585a-kube-api-access-8g9s9\") pod \"0c6eebc2-cb5b-4524-931a-96b86b65585a\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.805215 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-migration-ssh-key-0\") pod \"0c6eebc2-cb5b-4524-931a-96b86b65585a\" (UID: \"0c6eebc2-cb5b-4524-931a-96b86b65585a\") " Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.828723 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "0c6eebc2-cb5b-4524-931a-96b86b65585a" (UID: "0c6eebc2-cb5b-4524-931a-96b86b65585a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.832502 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "0c6eebc2-cb5b-4524-931a-96b86b65585a" (UID: "0c6eebc2-cb5b-4524-931a-96b86b65585a"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.832670 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c6eebc2-cb5b-4524-931a-96b86b65585a-kube-api-access-8g9s9" (OuterVolumeSpecName: "kube-api-access-8g9s9") pod "0c6eebc2-cb5b-4524-931a-96b86b65585a" (UID: "0c6eebc2-cb5b-4524-931a-96b86b65585a"). InnerVolumeSpecName "kube-api-access-8g9s9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.838031 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-inventory" (OuterVolumeSpecName: "inventory") pod "0c6eebc2-cb5b-4524-931a-96b86b65585a" (UID: "0c6eebc2-cb5b-4524-931a-96b86b65585a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.840058 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "0c6eebc2-cb5b-4524-931a-96b86b65585a" (UID: "0c6eebc2-cb5b-4524-931a-96b86b65585a"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.851816 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "0c6eebc2-cb5b-4524-931a-96b86b65585a" (UID: "0c6eebc2-cb5b-4524-931a-96b86b65585a"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.852159 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "0c6eebc2-cb5b-4524-931a-96b86b65585a" (UID: "0c6eebc2-cb5b-4524-931a-96b86b65585a"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.853300 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0c6eebc2-cb5b-4524-931a-96b86b65585a" (UID: "0c6eebc2-cb5b-4524-931a-96b86b65585a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.855263 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "0c6eebc2-cb5b-4524-931a-96b86b65585a" (UID: "0c6eebc2-cb5b-4524-931a-96b86b65585a"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.907376 4806 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.907413 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g9s9\" (UniqueName: \"kubernetes.io/projected/0c6eebc2-cb5b-4524-931a-96b86b65585a-kube-api-access-8g9s9\") on node \"crc\" DevicePath \"\"" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.907425 4806 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.907436 4806 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.907449 4806 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.907463 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.907474 4806 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.907484 4806 reconciler_common.go:293] "Volume detached for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c6eebc2-cb5b-4524-931a-96b86b65585a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:36:52 crc kubenswrapper[4806]: I0126 08:36:52.907496 4806 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/0c6eebc2-cb5b-4524-931a-96b86b65585a-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.284685 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" event={"ID":"0c6eebc2-cb5b-4524-931a-96b86b65585a","Type":"ContainerDied","Data":"e54794255518985bf93268fff468bbfa1ac5eb1de08c20d31625e26300d01ee6"} Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.284722 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e54794255518985bf93268fff468bbfa1ac5eb1de08c20d31625e26300d01ee6" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.284777 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-tdxcm" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.394826 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn"] Jan 26 08:36:53 crc kubenswrapper[4806]: E0126 08:36:53.395450 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c6eebc2-cb5b-4524-931a-96b86b65585a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.395466 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c6eebc2-cb5b-4524-931a-96b86b65585a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.395659 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c6eebc2-cb5b-4524-931a-96b86b65585a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.396251 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.399906 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.400070 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-tr4w7" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.400103 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.400215 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.403736 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.420843 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn"] Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.520134 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.520440 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.520554 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.520717 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.520833 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 
08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.520969 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.521067 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2frk2\" (UniqueName: \"kubernetes.io/projected/26657020-74ce-471a-8877-43f4fd4fde5d-kube-api-access-2frk2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.622681 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.622738 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.622758 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.622791 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.622808 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.622845 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.622872 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2frk2\" (UniqueName: \"kubernetes.io/projected/26657020-74ce-471a-8877-43f4fd4fde5d-kube-api-access-2frk2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.627459 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.628647 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.629087 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.629117 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.631123 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.631768 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.642275 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2frk2\" (UniqueName: 
\"kubernetes.io/projected/26657020-74ce-471a-8877-43f4fd4fde5d-kube-api-access-2frk2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-72fbn\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:53 crc kubenswrapper[4806]: I0126 08:36:53.709732 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:36:54 crc kubenswrapper[4806]: I0126 08:36:54.041902 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:36:54 crc kubenswrapper[4806]: E0126 08:36:54.042687 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:36:54 crc kubenswrapper[4806]: I0126 08:36:54.217881 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn"] Jan 26 08:36:54 crc kubenswrapper[4806]: I0126 08:36:54.227037 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 08:36:54 crc kubenswrapper[4806]: I0126 08:36:54.293626 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" event={"ID":"26657020-74ce-471a-8877-43f4fd4fde5d","Type":"ContainerStarted","Data":"5bde6bafd69e7ef77a0026d5565c45c061bd65ec9c9edfab0c249436cb2727f7"} Jan 26 08:36:55 crc kubenswrapper[4806]: I0126 08:36:55.305710 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" event={"ID":"26657020-74ce-471a-8877-43f4fd4fde5d","Type":"ContainerStarted","Data":"92a77737c206e1f5fb237aced0a044ba93efab00f251844ef91b03563bb3dbb6"} Jan 26 08:36:55 crc kubenswrapper[4806]: I0126 08:36:55.346166 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" podStartSLOduration=1.734724213 podStartE2EDuration="2.346141171s" podCreationTimestamp="2026-01-26 08:36:53 +0000 UTC" firstStartedPulling="2026-01-26 08:36:54.226607074 +0000 UTC m=+2593.491015140" lastFinishedPulling="2026-01-26 08:36:54.838024042 +0000 UTC m=+2594.102432098" observedRunningTime="2026-01-26 08:36:55.342390355 +0000 UTC m=+2594.606798411" watchObservedRunningTime="2026-01-26 08:36:55.346141171 +0000 UTC m=+2594.610549227" Jan 26 08:37:05 crc kubenswrapper[4806]: I0126 08:37:05.042575 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:37:05 crc kubenswrapper[4806]: E0126 08:37:05.043266 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:37:17 crc kubenswrapper[4806]: I0126 
08:37:17.041902 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:37:17 crc kubenswrapper[4806]: E0126 08:37:17.042693 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:37:31 crc kubenswrapper[4806]: I0126 08:37:31.048591 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:37:31 crc kubenswrapper[4806]: E0126 08:37:31.049369 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:37:44 crc kubenswrapper[4806]: I0126 08:37:44.041731 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:37:44 crc kubenswrapper[4806]: E0126 08:37:44.042639 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:37:58 crc kubenswrapper[4806]: I0126 08:37:58.042341 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:37:58 crc kubenswrapper[4806]: E0126 08:37:58.044188 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:38:12 crc kubenswrapper[4806]: I0126 08:38:12.041994 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:38:12 crc kubenswrapper[4806]: E0126 08:38:12.042925 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:38:26 crc kubenswrapper[4806]: I0126 08:38:26.042237 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:38:26 crc kubenswrapper[4806]: E0126 08:38:26.043030 
4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:38:41 crc kubenswrapper[4806]: I0126 08:38:41.048791 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:38:41 crc kubenswrapper[4806]: E0126 08:38:41.049722 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:38:52 crc kubenswrapper[4806]: I0126 08:38:52.041563 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:38:52 crc kubenswrapper[4806]: I0126 08:38:52.366461 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"5b1dcff18202e7b1ccf45070afc686479bd6ae343787b05873debdd35fcca2bb"} Jan 26 08:39:05 crc kubenswrapper[4806]: I0126 08:39:05.763873 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b6wrz"] Jan 26 08:39:05 crc kubenswrapper[4806]: I0126 08:39:05.769616 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:05 crc kubenswrapper[4806]: I0126 08:39:05.783608 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b6wrz"] Jan 26 08:39:05 crc kubenswrapper[4806]: I0126 08:39:05.918547 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573da633-1b60-4665-8f42-3bf30f8ce47d-utilities\") pod \"redhat-operators-b6wrz\" (UID: \"573da633-1b60-4665-8f42-3bf30f8ce47d\") " pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:05 crc kubenswrapper[4806]: I0126 08:39:05.918919 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573da633-1b60-4665-8f42-3bf30f8ce47d-catalog-content\") pod \"redhat-operators-b6wrz\" (UID: \"573da633-1b60-4665-8f42-3bf30f8ce47d\") " pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:05 crc kubenswrapper[4806]: I0126 08:39:05.919012 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w56kh\" (UniqueName: \"kubernetes.io/projected/573da633-1b60-4665-8f42-3bf30f8ce47d-kube-api-access-w56kh\") pod \"redhat-operators-b6wrz\" (UID: \"573da633-1b60-4665-8f42-3bf30f8ce47d\") " pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:06 crc kubenswrapper[4806]: I0126 08:39:06.021295 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573da633-1b60-4665-8f42-3bf30f8ce47d-utilities\") pod \"redhat-operators-b6wrz\" (UID: \"573da633-1b60-4665-8f42-3bf30f8ce47d\") " pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:06 crc kubenswrapper[4806]: I0126 08:39:06.021476 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573da633-1b60-4665-8f42-3bf30f8ce47d-catalog-content\") pod \"redhat-operators-b6wrz\" (UID: \"573da633-1b60-4665-8f42-3bf30f8ce47d\") " pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:06 crc kubenswrapper[4806]: I0126 08:39:06.021539 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w56kh\" (UniqueName: \"kubernetes.io/projected/573da633-1b60-4665-8f42-3bf30f8ce47d-kube-api-access-w56kh\") pod \"redhat-operators-b6wrz\" (UID: \"573da633-1b60-4665-8f42-3bf30f8ce47d\") " pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:06 crc kubenswrapper[4806]: I0126 08:39:06.021993 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573da633-1b60-4665-8f42-3bf30f8ce47d-utilities\") pod \"redhat-operators-b6wrz\" (UID: \"573da633-1b60-4665-8f42-3bf30f8ce47d\") " pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:06 crc kubenswrapper[4806]: I0126 08:39:06.022131 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573da633-1b60-4665-8f42-3bf30f8ce47d-catalog-content\") pod \"redhat-operators-b6wrz\" (UID: \"573da633-1b60-4665-8f42-3bf30f8ce47d\") " pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:06 crc kubenswrapper[4806]: I0126 08:39:06.046426 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-w56kh\" (UniqueName: \"kubernetes.io/projected/573da633-1b60-4665-8f42-3bf30f8ce47d-kube-api-access-w56kh\") pod \"redhat-operators-b6wrz\" (UID: \"573da633-1b60-4665-8f42-3bf30f8ce47d\") " pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:06 crc kubenswrapper[4806]: I0126 08:39:06.093008 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:06 crc kubenswrapper[4806]: I0126 08:39:06.549338 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b6wrz"] Jan 26 08:39:06 crc kubenswrapper[4806]: W0126 08:39:06.551151 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod573da633_1b60_4665_8f42_3bf30f8ce47d.slice/crio-b0d1193abc825d7dd7bdc2a454e6845a027f7352aa75394061f18a36ceeb5a2c WatchSource:0}: Error finding container b0d1193abc825d7dd7bdc2a454e6845a027f7352aa75394061f18a36ceeb5a2c: Status 404 returned error can't find the container with id b0d1193abc825d7dd7bdc2a454e6845a027f7352aa75394061f18a36ceeb5a2c Jan 26 08:39:07 crc kubenswrapper[4806]: I0126 08:39:07.503466 4806 generic.go:334] "Generic (PLEG): container finished" podID="573da633-1b60-4665-8f42-3bf30f8ce47d" containerID="522c1f205f07b3be9383bd56cfba54c18484c7a7a921c980d0733cbae62a5c43" exitCode=0 Jan 26 08:39:07 crc kubenswrapper[4806]: I0126 08:39:07.503669 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6wrz" event={"ID":"573da633-1b60-4665-8f42-3bf30f8ce47d","Type":"ContainerDied","Data":"522c1f205f07b3be9383bd56cfba54c18484c7a7a921c980d0733cbae62a5c43"} Jan 26 08:39:07 crc kubenswrapper[4806]: I0126 08:39:07.503841 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6wrz" event={"ID":"573da633-1b60-4665-8f42-3bf30f8ce47d","Type":"ContainerStarted","Data":"b0d1193abc825d7dd7bdc2a454e6845a027f7352aa75394061f18a36ceeb5a2c"} Jan 26 08:39:08 crc kubenswrapper[4806]: I0126 08:39:08.513812 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6wrz" event={"ID":"573da633-1b60-4665-8f42-3bf30f8ce47d","Type":"ContainerStarted","Data":"ea93f6a5f424d34a4b8dd00e103dc61017e2a3daf82d0e3f143137d6aada61a7"} Jan 26 08:39:12 crc kubenswrapper[4806]: I0126 08:39:12.548420 4806 generic.go:334] "Generic (PLEG): container finished" podID="573da633-1b60-4665-8f42-3bf30f8ce47d" containerID="ea93f6a5f424d34a4b8dd00e103dc61017e2a3daf82d0e3f143137d6aada61a7" exitCode=0 Jan 26 08:39:12 crc kubenswrapper[4806]: I0126 08:39:12.548494 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6wrz" event={"ID":"573da633-1b60-4665-8f42-3bf30f8ce47d","Type":"ContainerDied","Data":"ea93f6a5f424d34a4b8dd00e103dc61017e2a3daf82d0e3f143137d6aada61a7"} Jan 26 08:39:13 crc kubenswrapper[4806]: I0126 08:39:13.558353 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6wrz" event={"ID":"573da633-1b60-4665-8f42-3bf30f8ce47d","Type":"ContainerStarted","Data":"45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3"} Jan 26 08:39:13 crc kubenswrapper[4806]: I0126 08:39:13.578928 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b6wrz" podStartSLOduration=2.960683569 podStartE2EDuration="8.578910833s" 
podCreationTimestamp="2026-01-26 08:39:05 +0000 UTC" firstStartedPulling="2026-01-26 08:39:07.506028118 +0000 UTC m=+2726.770436184" lastFinishedPulling="2026-01-26 08:39:13.124255362 +0000 UTC m=+2732.388663448" observedRunningTime="2026-01-26 08:39:13.574812058 +0000 UTC m=+2732.839220114" watchObservedRunningTime="2026-01-26 08:39:13.578910833 +0000 UTC m=+2732.843318889" Jan 26 08:39:16 crc kubenswrapper[4806]: I0126 08:39:16.093978 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:16 crc kubenswrapper[4806]: I0126 08:39:16.095016 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:17 crc kubenswrapper[4806]: I0126 08:39:17.146928 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b6wrz" podUID="573da633-1b60-4665-8f42-3bf30f8ce47d" containerName="registry-server" probeResult="failure" output=< Jan 26 08:39:17 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 08:39:17 crc kubenswrapper[4806]: > Jan 26 08:39:26 crc kubenswrapper[4806]: I0126 08:39:26.159860 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:26 crc kubenswrapper[4806]: I0126 08:39:26.214374 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:26 crc kubenswrapper[4806]: I0126 08:39:26.404077 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b6wrz"] Jan 26 08:39:27 crc kubenswrapper[4806]: I0126 08:39:27.676647 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b6wrz" podUID="573da633-1b60-4665-8f42-3bf30f8ce47d" containerName="registry-server" containerID="cri-o://45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3" gracePeriod=2 Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.197746 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.267191 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573da633-1b60-4665-8f42-3bf30f8ce47d-utilities\") pod \"573da633-1b60-4665-8f42-3bf30f8ce47d\" (UID: \"573da633-1b60-4665-8f42-3bf30f8ce47d\") " Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.267271 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573da633-1b60-4665-8f42-3bf30f8ce47d-catalog-content\") pod \"573da633-1b60-4665-8f42-3bf30f8ce47d\" (UID: \"573da633-1b60-4665-8f42-3bf30f8ce47d\") " Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.267341 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w56kh\" (UniqueName: \"kubernetes.io/projected/573da633-1b60-4665-8f42-3bf30f8ce47d-kube-api-access-w56kh\") pod \"573da633-1b60-4665-8f42-3bf30f8ce47d\" (UID: \"573da633-1b60-4665-8f42-3bf30f8ce47d\") " Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.268034 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/573da633-1b60-4665-8f42-3bf30f8ce47d-utilities" (OuterVolumeSpecName: "utilities") pod "573da633-1b60-4665-8f42-3bf30f8ce47d" (UID: "573da633-1b60-4665-8f42-3bf30f8ce47d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.278741 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573da633-1b60-4665-8f42-3bf30f8ce47d-kube-api-access-w56kh" (OuterVolumeSpecName: "kube-api-access-w56kh") pod "573da633-1b60-4665-8f42-3bf30f8ce47d" (UID: "573da633-1b60-4665-8f42-3bf30f8ce47d"). InnerVolumeSpecName "kube-api-access-w56kh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.369882 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573da633-1b60-4665-8f42-3bf30f8ce47d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.369914 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w56kh\" (UniqueName: \"kubernetes.io/projected/573da633-1b60-4665-8f42-3bf30f8ce47d-kube-api-access-w56kh\") on node \"crc\" DevicePath \"\"" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.391975 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/573da633-1b60-4665-8f42-3bf30f8ce47d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "573da633-1b60-4665-8f42-3bf30f8ce47d" (UID: "573da633-1b60-4665-8f42-3bf30f8ce47d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.471580 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573da633-1b60-4665-8f42-3bf30f8ce47d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.686860 4806 generic.go:334] "Generic (PLEG): container finished" podID="573da633-1b60-4665-8f42-3bf30f8ce47d" containerID="45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3" exitCode=0 Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.686962 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6wrz" event={"ID":"573da633-1b60-4665-8f42-3bf30f8ce47d","Type":"ContainerDied","Data":"45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3"} Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.687804 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b6wrz" event={"ID":"573da633-1b60-4665-8f42-3bf30f8ce47d","Type":"ContainerDied","Data":"b0d1193abc825d7dd7bdc2a454e6845a027f7352aa75394061f18a36ceeb5a2c"} Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.687013 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b6wrz" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.687869 4806 scope.go:117] "RemoveContainer" containerID="45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.724696 4806 scope.go:117] "RemoveContainer" containerID="ea93f6a5f424d34a4b8dd00e103dc61017e2a3daf82d0e3f143137d6aada61a7" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.729220 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b6wrz"] Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.738640 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b6wrz"] Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.751445 4806 scope.go:117] "RemoveContainer" containerID="522c1f205f07b3be9383bd56cfba54c18484c7a7a921c980d0733cbae62a5c43" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.797333 4806 scope.go:117] "RemoveContainer" containerID="45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3" Jan 26 08:39:28 crc kubenswrapper[4806]: E0126 08:39:28.798302 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3\": container with ID starting with 45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3 not found: ID does not exist" containerID="45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.798341 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3"} err="failed to get container status \"45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3\": rpc error: code = NotFound desc = could not find container \"45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3\": container with ID starting with 45ba678fe3c5fdedad79d76c35e07cdb7e7b087a85bc9938538f7126bfa9ffe3 not found: ID does not exist" Jan 26 08:39:28 crc 
kubenswrapper[4806]: I0126 08:39:28.798367 4806 scope.go:117] "RemoveContainer" containerID="ea93f6a5f424d34a4b8dd00e103dc61017e2a3daf82d0e3f143137d6aada61a7" Jan 26 08:39:28 crc kubenswrapper[4806]: E0126 08:39:28.798674 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea93f6a5f424d34a4b8dd00e103dc61017e2a3daf82d0e3f143137d6aada61a7\": container with ID starting with ea93f6a5f424d34a4b8dd00e103dc61017e2a3daf82d0e3f143137d6aada61a7 not found: ID does not exist" containerID="ea93f6a5f424d34a4b8dd00e103dc61017e2a3daf82d0e3f143137d6aada61a7" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.798698 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea93f6a5f424d34a4b8dd00e103dc61017e2a3daf82d0e3f143137d6aada61a7"} err="failed to get container status \"ea93f6a5f424d34a4b8dd00e103dc61017e2a3daf82d0e3f143137d6aada61a7\": rpc error: code = NotFound desc = could not find container \"ea93f6a5f424d34a4b8dd00e103dc61017e2a3daf82d0e3f143137d6aada61a7\": container with ID starting with ea93f6a5f424d34a4b8dd00e103dc61017e2a3daf82d0e3f143137d6aada61a7 not found: ID does not exist" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.798718 4806 scope.go:117] "RemoveContainer" containerID="522c1f205f07b3be9383bd56cfba54c18484c7a7a921c980d0733cbae62a5c43" Jan 26 08:39:28 crc kubenswrapper[4806]: E0126 08:39:28.799059 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"522c1f205f07b3be9383bd56cfba54c18484c7a7a921c980d0733cbae62a5c43\": container with ID starting with 522c1f205f07b3be9383bd56cfba54c18484c7a7a921c980d0733cbae62a5c43 not found: ID does not exist" containerID="522c1f205f07b3be9383bd56cfba54c18484c7a7a921c980d0733cbae62a5c43" Jan 26 08:39:28 crc kubenswrapper[4806]: I0126 08:39:28.799148 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"522c1f205f07b3be9383bd56cfba54c18484c7a7a921c980d0733cbae62a5c43"} err="failed to get container status \"522c1f205f07b3be9383bd56cfba54c18484c7a7a921c980d0733cbae62a5c43\": rpc error: code = NotFound desc = could not find container \"522c1f205f07b3be9383bd56cfba54c18484c7a7a921c980d0733cbae62a5c43\": container with ID starting with 522c1f205f07b3be9383bd56cfba54c18484c7a7a921c980d0733cbae62a5c43 not found: ID does not exist" Jan 26 08:39:29 crc kubenswrapper[4806]: I0126 08:39:29.054872 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573da633-1b60-4665-8f42-3bf30f8ce47d" path="/var/lib/kubelet/pods/573da633-1b60-4665-8f42-3bf30f8ce47d/volumes" Jan 26 08:39:31 crc kubenswrapper[4806]: I0126 08:39:31.809907 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vrpx7"] Jan 26 08:39:31 crc kubenswrapper[4806]: E0126 08:39:31.810577 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573da633-1b60-4665-8f42-3bf30f8ce47d" containerName="extract-content" Jan 26 08:39:31 crc kubenswrapper[4806]: I0126 08:39:31.810589 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="573da633-1b60-4665-8f42-3bf30f8ce47d" containerName="extract-content" Jan 26 08:39:31 crc kubenswrapper[4806]: E0126 08:39:31.810613 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573da633-1b60-4665-8f42-3bf30f8ce47d" containerName="extract-utilities" Jan 26 08:39:31 crc kubenswrapper[4806]: I0126 08:39:31.810620 4806 
state_mem.go:107] "Deleted CPUSet assignment" podUID="573da633-1b60-4665-8f42-3bf30f8ce47d" containerName="extract-utilities" Jan 26 08:39:31 crc kubenswrapper[4806]: E0126 08:39:31.810635 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573da633-1b60-4665-8f42-3bf30f8ce47d" containerName="registry-server" Jan 26 08:39:31 crc kubenswrapper[4806]: I0126 08:39:31.810641 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="573da633-1b60-4665-8f42-3bf30f8ce47d" containerName="registry-server" Jan 26 08:39:31 crc kubenswrapper[4806]: I0126 08:39:31.810814 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="573da633-1b60-4665-8f42-3bf30f8ce47d" containerName="registry-server" Jan 26 08:39:31 crc kubenswrapper[4806]: I0126 08:39:31.812121 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:31 crc kubenswrapper[4806]: I0126 08:39:31.840828 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrpx7"] Jan 26 08:39:31 crc kubenswrapper[4806]: I0126 08:39:31.937475 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5zql\" (UniqueName: \"kubernetes.io/projected/9ec39aef-e85b-4c43-aa61-b769812e33bd-kube-api-access-c5zql\") pod \"redhat-marketplace-vrpx7\" (UID: \"9ec39aef-e85b-4c43-aa61-b769812e33bd\") " pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:31 crc kubenswrapper[4806]: I0126 08:39:31.937557 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec39aef-e85b-4c43-aa61-b769812e33bd-utilities\") pod \"redhat-marketplace-vrpx7\" (UID: \"9ec39aef-e85b-4c43-aa61-b769812e33bd\") " pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:31 crc kubenswrapper[4806]: I0126 08:39:31.937651 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec39aef-e85b-4c43-aa61-b769812e33bd-catalog-content\") pod \"redhat-marketplace-vrpx7\" (UID: \"9ec39aef-e85b-4c43-aa61-b769812e33bd\") " pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:32 crc kubenswrapper[4806]: I0126 08:39:32.039161 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec39aef-e85b-4c43-aa61-b769812e33bd-catalog-content\") pod \"redhat-marketplace-vrpx7\" (UID: \"9ec39aef-e85b-4c43-aa61-b769812e33bd\") " pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:32 crc kubenswrapper[4806]: I0126 08:39:32.039534 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5zql\" (UniqueName: \"kubernetes.io/projected/9ec39aef-e85b-4c43-aa61-b769812e33bd-kube-api-access-c5zql\") pod \"redhat-marketplace-vrpx7\" (UID: \"9ec39aef-e85b-4c43-aa61-b769812e33bd\") " pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:32 crc kubenswrapper[4806]: I0126 08:39:32.039652 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec39aef-e85b-4c43-aa61-b769812e33bd-utilities\") pod \"redhat-marketplace-vrpx7\" (UID: \"9ec39aef-e85b-4c43-aa61-b769812e33bd\") " pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:32 crc 
kubenswrapper[4806]: I0126 08:39:32.040166 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec39aef-e85b-4c43-aa61-b769812e33bd-utilities\") pod \"redhat-marketplace-vrpx7\" (UID: \"9ec39aef-e85b-4c43-aa61-b769812e33bd\") " pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:32 crc kubenswrapper[4806]: I0126 08:39:32.040463 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec39aef-e85b-4c43-aa61-b769812e33bd-catalog-content\") pod \"redhat-marketplace-vrpx7\" (UID: \"9ec39aef-e85b-4c43-aa61-b769812e33bd\") " pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:32 crc kubenswrapper[4806]: I0126 08:39:32.063821 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5zql\" (UniqueName: \"kubernetes.io/projected/9ec39aef-e85b-4c43-aa61-b769812e33bd-kube-api-access-c5zql\") pod \"redhat-marketplace-vrpx7\" (UID: \"9ec39aef-e85b-4c43-aa61-b769812e33bd\") " pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:32 crc kubenswrapper[4806]: I0126 08:39:32.150061 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:32 crc kubenswrapper[4806]: I0126 08:39:32.667904 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrpx7"] Jan 26 08:39:32 crc kubenswrapper[4806]: I0126 08:39:32.722440 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrpx7" event={"ID":"9ec39aef-e85b-4c43-aa61-b769812e33bd","Type":"ContainerStarted","Data":"e3eb0b3e935bc93346de6a4f135aa98ce8983960fffb14dfddc6e5e8f019fc99"} Jan 26 08:39:33 crc kubenswrapper[4806]: I0126 08:39:33.737936 4806 generic.go:334] "Generic (PLEG): container finished" podID="9ec39aef-e85b-4c43-aa61-b769812e33bd" containerID="fc610ce799617d68aef7e86ae0271d6f73803ad83d53ad7faef6b770cdb5d96b" exitCode=0 Jan 26 08:39:33 crc kubenswrapper[4806]: I0126 08:39:33.738020 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrpx7" event={"ID":"9ec39aef-e85b-4c43-aa61-b769812e33bd","Type":"ContainerDied","Data":"fc610ce799617d68aef7e86ae0271d6f73803ad83d53ad7faef6b770cdb5d96b"} Jan 26 08:39:34 crc kubenswrapper[4806]: I0126 08:39:34.747845 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrpx7" event={"ID":"9ec39aef-e85b-4c43-aa61-b769812e33bd","Type":"ContainerStarted","Data":"6244aaff0efcd835b192ef090e1926c9f90cc4ca5faf9b8d4adbfda07f502178"} Jan 26 08:39:35 crc kubenswrapper[4806]: I0126 08:39:35.759725 4806 generic.go:334] "Generic (PLEG): container finished" podID="9ec39aef-e85b-4c43-aa61-b769812e33bd" containerID="6244aaff0efcd835b192ef090e1926c9f90cc4ca5faf9b8d4adbfda07f502178" exitCode=0 Jan 26 08:39:35 crc kubenswrapper[4806]: I0126 08:39:35.759878 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrpx7" event={"ID":"9ec39aef-e85b-4c43-aa61-b769812e33bd","Type":"ContainerDied","Data":"6244aaff0efcd835b192ef090e1926c9f90cc4ca5faf9b8d4adbfda07f502178"} Jan 26 08:39:36 crc kubenswrapper[4806]: I0126 08:39:36.772221 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrpx7" 
event={"ID":"9ec39aef-e85b-4c43-aa61-b769812e33bd","Type":"ContainerStarted","Data":"c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b"} Jan 26 08:39:36 crc kubenswrapper[4806]: I0126 08:39:36.796904 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vrpx7" podStartSLOduration=3.344172033 podStartE2EDuration="5.79688978s" podCreationTimestamp="2026-01-26 08:39:31 +0000 UTC" firstStartedPulling="2026-01-26 08:39:33.740182308 +0000 UTC m=+2753.004590374" lastFinishedPulling="2026-01-26 08:39:36.192900065 +0000 UTC m=+2755.457308121" observedRunningTime="2026-01-26 08:39:36.78870987 +0000 UTC m=+2756.053117936" watchObservedRunningTime="2026-01-26 08:39:36.79688978 +0000 UTC m=+2756.061297836" Jan 26 08:39:42 crc kubenswrapper[4806]: I0126 08:39:42.151832 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:42 crc kubenswrapper[4806]: I0126 08:39:42.152436 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:42 crc kubenswrapper[4806]: I0126 08:39:42.198612 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:42 crc kubenswrapper[4806]: I0126 08:39:42.878440 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:42 crc kubenswrapper[4806]: I0126 08:39:42.927795 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrpx7"] Jan 26 08:39:44 crc kubenswrapper[4806]: I0126 08:39:44.848506 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vrpx7" podUID="9ec39aef-e85b-4c43-aa61-b769812e33bd" containerName="registry-server" containerID="cri-o://c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b" gracePeriod=2 Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.328995 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.496588 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5zql\" (UniqueName: \"kubernetes.io/projected/9ec39aef-e85b-4c43-aa61-b769812e33bd-kube-api-access-c5zql\") pod \"9ec39aef-e85b-4c43-aa61-b769812e33bd\" (UID: \"9ec39aef-e85b-4c43-aa61-b769812e33bd\") " Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.496641 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec39aef-e85b-4c43-aa61-b769812e33bd-catalog-content\") pod \"9ec39aef-e85b-4c43-aa61-b769812e33bd\" (UID: \"9ec39aef-e85b-4c43-aa61-b769812e33bd\") " Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.496704 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec39aef-e85b-4c43-aa61-b769812e33bd-utilities\") pod \"9ec39aef-e85b-4c43-aa61-b769812e33bd\" (UID: \"9ec39aef-e85b-4c43-aa61-b769812e33bd\") " Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.498011 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec39aef-e85b-4c43-aa61-b769812e33bd-utilities" (OuterVolumeSpecName: "utilities") pod "9ec39aef-e85b-4c43-aa61-b769812e33bd" (UID: "9ec39aef-e85b-4c43-aa61-b769812e33bd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.501896 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec39aef-e85b-4c43-aa61-b769812e33bd-kube-api-access-c5zql" (OuterVolumeSpecName: "kube-api-access-c5zql") pod "9ec39aef-e85b-4c43-aa61-b769812e33bd" (UID: "9ec39aef-e85b-4c43-aa61-b769812e33bd"). InnerVolumeSpecName "kube-api-access-c5zql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.522095 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec39aef-e85b-4c43-aa61-b769812e33bd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ec39aef-e85b-4c43-aa61-b769812e33bd" (UID: "9ec39aef-e85b-4c43-aa61-b769812e33bd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.598333 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5zql\" (UniqueName: \"kubernetes.io/projected/9ec39aef-e85b-4c43-aa61-b769812e33bd-kube-api-access-c5zql\") on node \"crc\" DevicePath \"\"" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.598369 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ec39aef-e85b-4c43-aa61-b769812e33bd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.598383 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ec39aef-e85b-4c43-aa61-b769812e33bd-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.859586 4806 generic.go:334] "Generic (PLEG): container finished" podID="9ec39aef-e85b-4c43-aa61-b769812e33bd" containerID="c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b" exitCode=0 Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.859625 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrpx7" event={"ID":"9ec39aef-e85b-4c43-aa61-b769812e33bd","Type":"ContainerDied","Data":"c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b"} Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.859651 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vrpx7" event={"ID":"9ec39aef-e85b-4c43-aa61-b769812e33bd","Type":"ContainerDied","Data":"e3eb0b3e935bc93346de6a4f135aa98ce8983960fffb14dfddc6e5e8f019fc99"} Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.859668 4806 scope.go:117] "RemoveContainer" containerID="c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.859677 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vrpx7" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.890377 4806 scope.go:117] "RemoveContainer" containerID="6244aaff0efcd835b192ef090e1926c9f90cc4ca5faf9b8d4adbfda07f502178" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.896533 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrpx7"] Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.908954 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vrpx7"] Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.921402 4806 scope.go:117] "RemoveContainer" containerID="fc610ce799617d68aef7e86ae0271d6f73803ad83d53ad7faef6b770cdb5d96b" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.973389 4806 scope.go:117] "RemoveContainer" containerID="c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b" Jan 26 08:39:45 crc kubenswrapper[4806]: E0126 08:39:45.973887 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b\": container with ID starting with c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b not found: ID does not exist" containerID="c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.973917 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b"} err="failed to get container status \"c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b\": rpc error: code = NotFound desc = could not find container \"c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b\": container with ID starting with c74dd55834b3b7a7e26b9e18ecb19835185e7edde85d29aa41a2b09b4373a99b not found: ID does not exist" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.973938 4806 scope.go:117] "RemoveContainer" containerID="6244aaff0efcd835b192ef090e1926c9f90cc4ca5faf9b8d4adbfda07f502178" Jan 26 08:39:45 crc kubenswrapper[4806]: E0126 08:39:45.974333 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6244aaff0efcd835b192ef090e1926c9f90cc4ca5faf9b8d4adbfda07f502178\": container with ID starting with 6244aaff0efcd835b192ef090e1926c9f90cc4ca5faf9b8d4adbfda07f502178 not found: ID does not exist" containerID="6244aaff0efcd835b192ef090e1926c9f90cc4ca5faf9b8d4adbfda07f502178" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.974387 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6244aaff0efcd835b192ef090e1926c9f90cc4ca5faf9b8d4adbfda07f502178"} err="failed to get container status \"6244aaff0efcd835b192ef090e1926c9f90cc4ca5faf9b8d4adbfda07f502178\": rpc error: code = NotFound desc = could not find container \"6244aaff0efcd835b192ef090e1926c9f90cc4ca5faf9b8d4adbfda07f502178\": container with ID starting with 6244aaff0efcd835b192ef090e1926c9f90cc4ca5faf9b8d4adbfda07f502178 not found: ID does not exist" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.974420 4806 scope.go:117] "RemoveContainer" containerID="fc610ce799617d68aef7e86ae0271d6f73803ad83d53ad7faef6b770cdb5d96b" Jan 26 08:39:45 crc kubenswrapper[4806]: E0126 08:39:45.974762 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fc610ce799617d68aef7e86ae0271d6f73803ad83d53ad7faef6b770cdb5d96b\": container with ID starting with fc610ce799617d68aef7e86ae0271d6f73803ad83d53ad7faef6b770cdb5d96b not found: ID does not exist" containerID="fc610ce799617d68aef7e86ae0271d6f73803ad83d53ad7faef6b770cdb5d96b" Jan 26 08:39:45 crc kubenswrapper[4806]: I0126 08:39:45.974788 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc610ce799617d68aef7e86ae0271d6f73803ad83d53ad7faef6b770cdb5d96b"} err="failed to get container status \"fc610ce799617d68aef7e86ae0271d6f73803ad83d53ad7faef6b770cdb5d96b\": rpc error: code = NotFound desc = could not find container \"fc610ce799617d68aef7e86ae0271d6f73803ad83d53ad7faef6b770cdb5d96b\": container with ID starting with fc610ce799617d68aef7e86ae0271d6f73803ad83d53ad7faef6b770cdb5d96b not found: ID does not exist" Jan 26 08:39:47 crc kubenswrapper[4806]: I0126 08:39:47.102202 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ec39aef-e85b-4c43-aa61-b769812e33bd" path="/var/lib/kubelet/pods/9ec39aef-e85b-4c43-aa61-b769812e33bd/volumes" Jan 26 08:39:47 crc kubenswrapper[4806]: I0126 08:39:47.849269 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wrjdf"] Jan 26 08:39:47 crc kubenswrapper[4806]: E0126 08:39:47.849697 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec39aef-e85b-4c43-aa61-b769812e33bd" containerName="extract-utilities" Jan 26 08:39:47 crc kubenswrapper[4806]: I0126 08:39:47.849717 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec39aef-e85b-4c43-aa61-b769812e33bd" containerName="extract-utilities" Jan 26 08:39:47 crc kubenswrapper[4806]: E0126 08:39:47.849727 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec39aef-e85b-4c43-aa61-b769812e33bd" containerName="registry-server" Jan 26 08:39:47 crc kubenswrapper[4806]: I0126 08:39:47.849734 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec39aef-e85b-4c43-aa61-b769812e33bd" containerName="registry-server" Jan 26 08:39:47 crc kubenswrapper[4806]: E0126 08:39:47.849747 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec39aef-e85b-4c43-aa61-b769812e33bd" containerName="extract-content" Jan 26 08:39:47 crc kubenswrapper[4806]: I0126 08:39:47.849753 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec39aef-e85b-4c43-aa61-b769812e33bd" containerName="extract-content" Jan 26 08:39:47 crc kubenswrapper[4806]: I0126 08:39:47.849936 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec39aef-e85b-4c43-aa61-b769812e33bd" containerName="registry-server" Jan 26 08:39:47 crc kubenswrapper[4806]: I0126 08:39:47.851238 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:47 crc kubenswrapper[4806]: I0126 08:39:47.870027 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wrjdf"] Jan 26 08:39:48 crc kubenswrapper[4806]: I0126 08:39:48.012496 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-catalog-content\") pod \"community-operators-wrjdf\" (UID: \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\") " pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:48 crc kubenswrapper[4806]: I0126 08:39:48.012616 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbnpd\" (UniqueName: \"kubernetes.io/projected/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-kube-api-access-hbnpd\") pod \"community-operators-wrjdf\" (UID: \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\") " pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:48 crc kubenswrapper[4806]: I0126 08:39:48.012692 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-utilities\") pod \"community-operators-wrjdf\" (UID: \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\") " pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:48 crc kubenswrapper[4806]: I0126 08:39:48.114877 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbnpd\" (UniqueName: \"kubernetes.io/projected/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-kube-api-access-hbnpd\") pod \"community-operators-wrjdf\" (UID: \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\") " pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:48 crc kubenswrapper[4806]: I0126 08:39:48.115236 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-utilities\") pod \"community-operators-wrjdf\" (UID: \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\") " pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:48 crc kubenswrapper[4806]: I0126 08:39:48.115278 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-catalog-content\") pod \"community-operators-wrjdf\" (UID: \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\") " pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:48 crc kubenswrapper[4806]: I0126 08:39:48.115732 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-catalog-content\") pod \"community-operators-wrjdf\" (UID: \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\") " pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:48 crc kubenswrapper[4806]: I0126 08:39:48.115796 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-utilities\") pod \"community-operators-wrjdf\" (UID: \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\") " pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:48 crc kubenswrapper[4806]: I0126 08:39:48.141419 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hbnpd\" (UniqueName: \"kubernetes.io/projected/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-kube-api-access-hbnpd\") pod \"community-operators-wrjdf\" (UID: \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\") " pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:48 crc kubenswrapper[4806]: I0126 08:39:48.264437 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:48 crc kubenswrapper[4806]: W0126 08:39:48.840690 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c53fcd2_06aa_43fb_a733_3f1b2889ddcd.slice/crio-94c22fffc8d29f5d03592fb0cc3e336b29e00091d366c431281956edd2a29301 WatchSource:0}: Error finding container 94c22fffc8d29f5d03592fb0cc3e336b29e00091d366c431281956edd2a29301: Status 404 returned error can't find the container with id 94c22fffc8d29f5d03592fb0cc3e336b29e00091d366c431281956edd2a29301 Jan 26 08:39:48 crc kubenswrapper[4806]: I0126 08:39:48.846782 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wrjdf"] Jan 26 08:39:49 crc kubenswrapper[4806]: I0126 08:39:49.123526 4806 generic.go:334] "Generic (PLEG): container finished" podID="0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" containerID="4b4cf06357765c30ae44ac1359430b973e18c6997cae50682f0d5803eed0a317" exitCode=0 Jan 26 08:39:49 crc kubenswrapper[4806]: I0126 08:39:49.123659 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrjdf" event={"ID":"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd","Type":"ContainerDied","Data":"4b4cf06357765c30ae44ac1359430b973e18c6997cae50682f0d5803eed0a317"} Jan 26 08:39:49 crc kubenswrapper[4806]: I0126 08:39:49.123918 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrjdf" event={"ID":"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd","Type":"ContainerStarted","Data":"94c22fffc8d29f5d03592fb0cc3e336b29e00091d366c431281956edd2a29301"} Jan 26 08:39:50 crc kubenswrapper[4806]: I0126 08:39:50.133465 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrjdf" event={"ID":"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd","Type":"ContainerStarted","Data":"b81607f5a5245a6d3a69032c0d6c528485f1b305b83d161b9fd0573907877916"} Jan 26 08:39:52 crc kubenswrapper[4806]: I0126 08:39:52.150551 4806 generic.go:334] "Generic (PLEG): container finished" podID="0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" containerID="b81607f5a5245a6d3a69032c0d6c528485f1b305b83d161b9fd0573907877916" exitCode=0 Jan 26 08:39:52 crc kubenswrapper[4806]: I0126 08:39:52.150622 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrjdf" event={"ID":"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd","Type":"ContainerDied","Data":"b81607f5a5245a6d3a69032c0d6c528485f1b305b83d161b9fd0573907877916"} Jan 26 08:39:53 crc kubenswrapper[4806]: I0126 08:39:53.166300 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrjdf" event={"ID":"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd","Type":"ContainerStarted","Data":"4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc"} Jan 26 08:39:53 crc kubenswrapper[4806]: I0126 08:39:53.199363 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wrjdf" 
podStartSLOduration=2.476479667 podStartE2EDuration="6.199341081s" podCreationTimestamp="2026-01-26 08:39:47 +0000 UTC" firstStartedPulling="2026-01-26 08:39:49.125259671 +0000 UTC m=+2768.389667727" lastFinishedPulling="2026-01-26 08:39:52.848121085 +0000 UTC m=+2772.112529141" observedRunningTime="2026-01-26 08:39:53.186068498 +0000 UTC m=+2772.450476554" watchObservedRunningTime="2026-01-26 08:39:53.199341081 +0000 UTC m=+2772.463749147" Jan 26 08:39:58 crc kubenswrapper[4806]: I0126 08:39:58.265368 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:58 crc kubenswrapper[4806]: I0126 08:39:58.265941 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:58 crc kubenswrapper[4806]: I0126 08:39:58.324122 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:59 crc kubenswrapper[4806]: I0126 08:39:59.263276 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:39:59 crc kubenswrapper[4806]: I0126 08:39:59.316175 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wrjdf"] Jan 26 08:40:01 crc kubenswrapper[4806]: I0126 08:40:01.235431 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wrjdf" podUID="0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" containerName="registry-server" containerID="cri-o://4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc" gracePeriod=2 Jan 26 08:40:01 crc kubenswrapper[4806]: I0126 08:40:01.724874 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:40:01 crc kubenswrapper[4806]: I0126 08:40:01.820214 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbnpd\" (UniqueName: \"kubernetes.io/projected/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-kube-api-access-hbnpd\") pod \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\" (UID: \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\") " Jan 26 08:40:01 crc kubenswrapper[4806]: I0126 08:40:01.820402 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-catalog-content\") pod \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\" (UID: \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\") " Jan 26 08:40:01 crc kubenswrapper[4806]: I0126 08:40:01.820442 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-utilities\") pod \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\" (UID: \"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd\") " Jan 26 08:40:01 crc kubenswrapper[4806]: I0126 08:40:01.821285 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-utilities" (OuterVolumeSpecName: "utilities") pod "0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" (UID: "0c53fcd2-06aa-43fb-a733-3f1b2889ddcd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:40:01 crc kubenswrapper[4806]: I0126 08:40:01.836995 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-kube-api-access-hbnpd" (OuterVolumeSpecName: "kube-api-access-hbnpd") pod "0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" (UID: "0c53fcd2-06aa-43fb-a733-3f1b2889ddcd"). InnerVolumeSpecName "kube-api-access-hbnpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:40:01 crc kubenswrapper[4806]: I0126 08:40:01.871653 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" (UID: "0c53fcd2-06aa-43fb-a733-3f1b2889ddcd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:40:01 crc kubenswrapper[4806]: I0126 08:40:01.922342 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbnpd\" (UniqueName: \"kubernetes.io/projected/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-kube-api-access-hbnpd\") on node \"crc\" DevicePath \"\"" Jan 26 08:40:01 crc kubenswrapper[4806]: I0126 08:40:01.922371 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:40:01 crc kubenswrapper[4806]: I0126 08:40:01.922381 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.249638 4806 generic.go:334] "Generic (PLEG): container finished" podID="0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" containerID="4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc" exitCode=0 Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.249680 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrjdf" event={"ID":"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd","Type":"ContainerDied","Data":"4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc"} Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.249705 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wrjdf" event={"ID":"0c53fcd2-06aa-43fb-a733-3f1b2889ddcd","Type":"ContainerDied","Data":"94c22fffc8d29f5d03592fb0cc3e336b29e00091d366c431281956edd2a29301"} Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.249722 4806 scope.go:117] "RemoveContainer" containerID="4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc" Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.249850 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wrjdf" Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.272099 4806 scope.go:117] "RemoveContainer" containerID="b81607f5a5245a6d3a69032c0d6c528485f1b305b83d161b9fd0573907877916" Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.295330 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wrjdf"] Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.307988 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wrjdf"] Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.315762 4806 scope.go:117] "RemoveContainer" containerID="4b4cf06357765c30ae44ac1359430b973e18c6997cae50682f0d5803eed0a317" Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.352449 4806 scope.go:117] "RemoveContainer" containerID="4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc" Jan 26 08:40:02 crc kubenswrapper[4806]: E0126 08:40:02.353028 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc\": container with ID starting with 4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc not found: ID does not exist" containerID="4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc" Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.353066 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc"} err="failed to get container status \"4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc\": rpc error: code = NotFound desc = could not find container \"4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc\": container with ID starting with 4e21b56464b4e2570db9bbf8808bf77bac9ee195a8600f6c8aca9b35449094fc not found: ID does not exist" Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.353092 4806 scope.go:117] "RemoveContainer" containerID="b81607f5a5245a6d3a69032c0d6c528485f1b305b83d161b9fd0573907877916" Jan 26 08:40:02 crc kubenswrapper[4806]: E0126 08:40:02.353899 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b81607f5a5245a6d3a69032c0d6c528485f1b305b83d161b9fd0573907877916\": container with ID starting with b81607f5a5245a6d3a69032c0d6c528485f1b305b83d161b9fd0573907877916 not found: ID does not exist" containerID="b81607f5a5245a6d3a69032c0d6c528485f1b305b83d161b9fd0573907877916" Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.354007 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b81607f5a5245a6d3a69032c0d6c528485f1b305b83d161b9fd0573907877916"} err="failed to get container status \"b81607f5a5245a6d3a69032c0d6c528485f1b305b83d161b9fd0573907877916\": rpc error: code = NotFound desc = could not find container \"b81607f5a5245a6d3a69032c0d6c528485f1b305b83d161b9fd0573907877916\": container with ID starting with b81607f5a5245a6d3a69032c0d6c528485f1b305b83d161b9fd0573907877916 not found: ID does not exist" Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.354075 4806 scope.go:117] "RemoveContainer" containerID="4b4cf06357765c30ae44ac1359430b973e18c6997cae50682f0d5803eed0a317" Jan 26 08:40:02 crc kubenswrapper[4806]: E0126 08:40:02.354544 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4b4cf06357765c30ae44ac1359430b973e18c6997cae50682f0d5803eed0a317\": container with ID starting with 4b4cf06357765c30ae44ac1359430b973e18c6997cae50682f0d5803eed0a317 not found: ID does not exist" containerID="4b4cf06357765c30ae44ac1359430b973e18c6997cae50682f0d5803eed0a317" Jan 26 08:40:02 crc kubenswrapper[4806]: I0126 08:40:02.354568 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b4cf06357765c30ae44ac1359430b973e18c6997cae50682f0d5803eed0a317"} err="failed to get container status \"4b4cf06357765c30ae44ac1359430b973e18c6997cae50682f0d5803eed0a317\": rpc error: code = NotFound desc = could not find container \"4b4cf06357765c30ae44ac1359430b973e18c6997cae50682f0d5803eed0a317\": container with ID starting with 4b4cf06357765c30ae44ac1359430b973e18c6997cae50682f0d5803eed0a317 not found: ID does not exist" Jan 26 08:40:03 crc kubenswrapper[4806]: I0126 08:40:03.051502 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" path="/var/lib/kubelet/pods/0c53fcd2-06aa-43fb-a733-3f1b2889ddcd/volumes" Jan 26 08:40:20 crc kubenswrapper[4806]: I0126 08:40:20.398323 4806 generic.go:334] "Generic (PLEG): container finished" podID="26657020-74ce-471a-8877-43f4fd4fde5d" containerID="92a77737c206e1f5fb237aced0a044ba93efab00f251844ef91b03563bb3dbb6" exitCode=0 Jan 26 08:40:20 crc kubenswrapper[4806]: I0126 08:40:20.398413 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" event={"ID":"26657020-74ce-471a-8877-43f4fd4fde5d","Type":"ContainerDied","Data":"92a77737c206e1f5fb237aced0a044ba93efab00f251844ef91b03563bb3dbb6"} Jan 26 08:40:21 crc kubenswrapper[4806]: I0126 08:40:21.974819 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.076714 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-inventory\") pod \"26657020-74ce-471a-8877-43f4fd4fde5d\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.076846 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ssh-key-openstack-edpm-ipam\") pod \"26657020-74ce-471a-8877-43f4fd4fde5d\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.076992 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-0\") pod \"26657020-74ce-471a-8877-43f4fd4fde5d\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.077008 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-telemetry-combined-ca-bundle\") pod \"26657020-74ce-471a-8877-43f4fd4fde5d\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.077074 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2frk2\" (UniqueName: \"kubernetes.io/projected/26657020-74ce-471a-8877-43f4fd4fde5d-kube-api-access-2frk2\") pod \"26657020-74ce-471a-8877-43f4fd4fde5d\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.077113 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-2\") pod \"26657020-74ce-471a-8877-43f4fd4fde5d\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.077142 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-1\") pod \"26657020-74ce-471a-8877-43f4fd4fde5d\" (UID: \"26657020-74ce-471a-8877-43f4fd4fde5d\") " Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.083720 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "26657020-74ce-471a-8877-43f4fd4fde5d" (UID: "26657020-74ce-471a-8877-43f4fd4fde5d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.089737 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26657020-74ce-471a-8877-43f4fd4fde5d-kube-api-access-2frk2" (OuterVolumeSpecName: "kube-api-access-2frk2") pod "26657020-74ce-471a-8877-43f4fd4fde5d" (UID: "26657020-74ce-471a-8877-43f4fd4fde5d"). InnerVolumeSpecName "kube-api-access-2frk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.113137 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "26657020-74ce-471a-8877-43f4fd4fde5d" (UID: "26657020-74ce-471a-8877-43f4fd4fde5d"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.116187 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "26657020-74ce-471a-8877-43f4fd4fde5d" (UID: "26657020-74ce-471a-8877-43f4fd4fde5d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.122379 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "26657020-74ce-471a-8877-43f4fd4fde5d" (UID: "26657020-74ce-471a-8877-43f4fd4fde5d"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.124866 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "26657020-74ce-471a-8877-43f4fd4fde5d" (UID: "26657020-74ce-471a-8877-43f4fd4fde5d"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.128498 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-inventory" (OuterVolumeSpecName: "inventory") pod "26657020-74ce-471a-8877-43f4fd4fde5d" (UID: "26657020-74ce-471a-8877-43f4fd4fde5d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.180414 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.180442 4806 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.180452 4806 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.180462 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2frk2\" (UniqueName: \"kubernetes.io/projected/26657020-74ce-471a-8877-43f4fd4fde5d-kube-api-access-2frk2\") on node \"crc\" DevicePath \"\"" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.180472 4806 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.180481 4806 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.180490 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/26657020-74ce-471a-8877-43f4fd4fde5d-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.415595 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" event={"ID":"26657020-74ce-471a-8877-43f4fd4fde5d","Type":"ContainerDied","Data":"5bde6bafd69e7ef77a0026d5565c45c061bd65ec9c9edfab0c249436cb2727f7"} Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.415641 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bde6bafd69e7ef77a0026d5565c45c061bd65ec9c9edfab0c249436cb2727f7" Jan 26 08:40:22 crc kubenswrapper[4806]: I0126 08:40:22.415700 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-72fbn" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.634750 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ksd7c"] Jan 26 08:40:59 crc kubenswrapper[4806]: E0126 08:40:59.635717 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26657020-74ce-471a-8877-43f4fd4fde5d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.635733 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="26657020-74ce-471a-8877-43f4fd4fde5d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 08:40:59 crc kubenswrapper[4806]: E0126 08:40:59.635751 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" containerName="extract-utilities" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.635760 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" containerName="extract-utilities" Jan 26 08:40:59 crc kubenswrapper[4806]: E0126 08:40:59.635790 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" containerName="registry-server" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.635798 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" containerName="registry-server" Jan 26 08:40:59 crc kubenswrapper[4806]: E0126 08:40:59.635820 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" containerName="extract-content" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.635828 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" containerName="extract-content" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.636046 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="26657020-74ce-471a-8877-43f4fd4fde5d" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.636068 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c53fcd2-06aa-43fb-a733-3f1b2889ddcd" containerName="registry-server" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.637694 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.647394 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ksd7c"] Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.772641 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62beedb9-4317-493f-af0d-a841b9bda0bc-catalog-content\") pod \"certified-operators-ksd7c\" (UID: \"62beedb9-4317-493f-af0d-a841b9bda0bc\") " pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.772856 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rtbd\" (UniqueName: \"kubernetes.io/projected/62beedb9-4317-493f-af0d-a841b9bda0bc-kube-api-access-8rtbd\") pod \"certified-operators-ksd7c\" (UID: \"62beedb9-4317-493f-af0d-a841b9bda0bc\") " pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.772995 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62beedb9-4317-493f-af0d-a841b9bda0bc-utilities\") pod \"certified-operators-ksd7c\" (UID: \"62beedb9-4317-493f-af0d-a841b9bda0bc\") " pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.875232 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rtbd\" (UniqueName: \"kubernetes.io/projected/62beedb9-4317-493f-af0d-a841b9bda0bc-kube-api-access-8rtbd\") pod \"certified-operators-ksd7c\" (UID: \"62beedb9-4317-493f-af0d-a841b9bda0bc\") " pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.875294 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62beedb9-4317-493f-af0d-a841b9bda0bc-utilities\") pod \"certified-operators-ksd7c\" (UID: \"62beedb9-4317-493f-af0d-a841b9bda0bc\") " pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.875382 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62beedb9-4317-493f-af0d-a841b9bda0bc-catalog-content\") pod \"certified-operators-ksd7c\" (UID: \"62beedb9-4317-493f-af0d-a841b9bda0bc\") " pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.875856 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62beedb9-4317-493f-af0d-a841b9bda0bc-catalog-content\") pod \"certified-operators-ksd7c\" (UID: \"62beedb9-4317-493f-af0d-a841b9bda0bc\") " pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.876309 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62beedb9-4317-493f-af0d-a841b9bda0bc-utilities\") pod \"certified-operators-ksd7c\" (UID: \"62beedb9-4317-493f-af0d-a841b9bda0bc\") " pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.918623 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8rtbd\" (UniqueName: \"kubernetes.io/projected/62beedb9-4317-493f-af0d-a841b9bda0bc-kube-api-access-8rtbd\") pod \"certified-operators-ksd7c\" (UID: \"62beedb9-4317-493f-af0d-a841b9bda0bc\") " pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:40:59 crc kubenswrapper[4806]: I0126 08:40:59.990879 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:41:00 crc kubenswrapper[4806]: I0126 08:41:00.547541 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ksd7c"] Jan 26 08:41:00 crc kubenswrapper[4806]: I0126 08:41:00.823147 4806 generic.go:334] "Generic (PLEG): container finished" podID="62beedb9-4317-493f-af0d-a841b9bda0bc" containerID="3fd9902e10a584e94e2d1b8d03aeab3dc59a9699457321a7ec5adb5937452769" exitCode=0 Jan 26 08:41:00 crc kubenswrapper[4806]: I0126 08:41:00.823265 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ksd7c" event={"ID":"62beedb9-4317-493f-af0d-a841b9bda0bc","Type":"ContainerDied","Data":"3fd9902e10a584e94e2d1b8d03aeab3dc59a9699457321a7ec5adb5937452769"} Jan 26 08:41:00 crc kubenswrapper[4806]: I0126 08:41:00.823465 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ksd7c" event={"ID":"62beedb9-4317-493f-af0d-a841b9bda0bc","Type":"ContainerStarted","Data":"5a2bd2a72c8e8ba16573b901ab17d50c3351ca0594939459ec8a551583bb1e34"} Jan 26 08:41:02 crc kubenswrapper[4806]: I0126 08:41:02.844445 4806 generic.go:334] "Generic (PLEG): container finished" podID="62beedb9-4317-493f-af0d-a841b9bda0bc" containerID="cf878c04aaf76f3a9b58a6191541099e7e78f3472e724e2c3d838161e6c5973d" exitCode=0 Jan 26 08:41:02 crc kubenswrapper[4806]: I0126 08:41:02.844485 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ksd7c" event={"ID":"62beedb9-4317-493f-af0d-a841b9bda0bc","Type":"ContainerDied","Data":"cf878c04aaf76f3a9b58a6191541099e7e78f3472e724e2c3d838161e6c5973d"} Jan 26 08:41:03 crc kubenswrapper[4806]: I0126 08:41:03.855687 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ksd7c" event={"ID":"62beedb9-4317-493f-af0d-a841b9bda0bc","Type":"ContainerStarted","Data":"79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052"} Jan 26 08:41:03 crc kubenswrapper[4806]: I0126 08:41:03.887806 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ksd7c" podStartSLOduration=2.5044006899999998 podStartE2EDuration="4.887783679s" podCreationTimestamp="2026-01-26 08:40:59 +0000 UTC" firstStartedPulling="2026-01-26 08:41:00.824657386 +0000 UTC m=+2840.089065442" lastFinishedPulling="2026-01-26 08:41:03.208040375 +0000 UTC m=+2842.472448431" observedRunningTime="2026-01-26 08:41:03.880854344 +0000 UTC m=+2843.145262390" watchObservedRunningTime="2026-01-26 08:41:03.887783679 +0000 UTC m=+2843.152191745" Jan 26 08:41:09 crc kubenswrapper[4806]: I0126 08:41:09.991573 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:41:09 crc kubenswrapper[4806]: I0126 08:41:09.993165 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:41:10 crc kubenswrapper[4806]: I0126 
08:41:10.069154 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:41:10 crc kubenswrapper[4806]: I0126 08:41:10.988241 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:41:11 crc kubenswrapper[4806]: I0126 08:41:11.053504 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ksd7c"] Jan 26 08:41:12 crc kubenswrapper[4806]: I0126 08:41:12.964646 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ksd7c" podUID="62beedb9-4317-493f-af0d-a841b9bda0bc" containerName="registry-server" containerID="cri-o://79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052" gracePeriod=2 Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.423116 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.578120 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62beedb9-4317-493f-af0d-a841b9bda0bc-utilities\") pod \"62beedb9-4317-493f-af0d-a841b9bda0bc\" (UID: \"62beedb9-4317-493f-af0d-a841b9bda0bc\") " Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.578328 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rtbd\" (UniqueName: \"kubernetes.io/projected/62beedb9-4317-493f-af0d-a841b9bda0bc-kube-api-access-8rtbd\") pod \"62beedb9-4317-493f-af0d-a841b9bda0bc\" (UID: \"62beedb9-4317-493f-af0d-a841b9bda0bc\") " Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.578367 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62beedb9-4317-493f-af0d-a841b9bda0bc-catalog-content\") pod \"62beedb9-4317-493f-af0d-a841b9bda0bc\" (UID: \"62beedb9-4317-493f-af0d-a841b9bda0bc\") " Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.580498 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62beedb9-4317-493f-af0d-a841b9bda0bc-utilities" (OuterVolumeSpecName: "utilities") pod "62beedb9-4317-493f-af0d-a841b9bda0bc" (UID: "62beedb9-4317-493f-af0d-a841b9bda0bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.590702 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62beedb9-4317-493f-af0d-a841b9bda0bc-kube-api-access-8rtbd" (OuterVolumeSpecName: "kube-api-access-8rtbd") pod "62beedb9-4317-493f-af0d-a841b9bda0bc" (UID: "62beedb9-4317-493f-af0d-a841b9bda0bc"). InnerVolumeSpecName "kube-api-access-8rtbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.638291 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62beedb9-4317-493f-af0d-a841b9bda0bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62beedb9-4317-493f-af0d-a841b9bda0bc" (UID: "62beedb9-4317-493f-af0d-a841b9bda0bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.681086 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rtbd\" (UniqueName: \"kubernetes.io/projected/62beedb9-4317-493f-af0d-a841b9bda0bc-kube-api-access-8rtbd\") on node \"crc\" DevicePath \"\"" Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.681494 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62beedb9-4317-493f-af0d-a841b9bda0bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.681506 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62beedb9-4317-493f-af0d-a841b9bda0bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.973601 4806 generic.go:334] "Generic (PLEG): container finished" podID="62beedb9-4317-493f-af0d-a841b9bda0bc" containerID="79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052" exitCode=0 Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.973669 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ksd7c" event={"ID":"62beedb9-4317-493f-af0d-a841b9bda0bc","Type":"ContainerDied","Data":"79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052"} Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.973692 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ksd7c" Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.973712 4806 scope.go:117] "RemoveContainer" containerID="79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052" Jan 26 08:41:13 crc kubenswrapper[4806]: I0126 08:41:13.973700 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ksd7c" event={"ID":"62beedb9-4317-493f-af0d-a841b9bda0bc","Type":"ContainerDied","Data":"5a2bd2a72c8e8ba16573b901ab17d50c3351ca0594939459ec8a551583bb1e34"} Jan 26 08:41:14 crc kubenswrapper[4806]: I0126 08:41:14.002811 4806 scope.go:117] "RemoveContainer" containerID="cf878c04aaf76f3a9b58a6191541099e7e78f3472e724e2c3d838161e6c5973d" Jan 26 08:41:14 crc kubenswrapper[4806]: I0126 08:41:14.012670 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ksd7c"] Jan 26 08:41:14 crc kubenswrapper[4806]: I0126 08:41:14.044090 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ksd7c"] Jan 26 08:41:14 crc kubenswrapper[4806]: I0126 08:41:14.051876 4806 scope.go:117] "RemoveContainer" containerID="3fd9902e10a584e94e2d1b8d03aeab3dc59a9699457321a7ec5adb5937452769" Jan 26 08:41:14 crc kubenswrapper[4806]: I0126 08:41:14.074629 4806 scope.go:117] "RemoveContainer" containerID="79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052" Jan 26 08:41:14 crc kubenswrapper[4806]: E0126 08:41:14.075245 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052\": container with ID starting with 79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052 not found: ID does not exist" containerID="79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052" Jan 26 08:41:14 crc kubenswrapper[4806]: I0126 08:41:14.075305 
4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052"} err="failed to get container status \"79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052\": rpc error: code = NotFound desc = could not find container \"79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052\": container with ID starting with 79bfd17c8d2a382da947734941578f07dc67ed6afe22ea0f060879bbd7105052 not found: ID does not exist" Jan 26 08:41:14 crc kubenswrapper[4806]: I0126 08:41:14.075337 4806 scope.go:117] "RemoveContainer" containerID="cf878c04aaf76f3a9b58a6191541099e7e78f3472e724e2c3d838161e6c5973d" Jan 26 08:41:14 crc kubenswrapper[4806]: E0126 08:41:14.076447 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf878c04aaf76f3a9b58a6191541099e7e78f3472e724e2c3d838161e6c5973d\": container with ID starting with cf878c04aaf76f3a9b58a6191541099e7e78f3472e724e2c3d838161e6c5973d not found: ID does not exist" containerID="cf878c04aaf76f3a9b58a6191541099e7e78f3472e724e2c3d838161e6c5973d" Jan 26 08:41:14 crc kubenswrapper[4806]: I0126 08:41:14.076475 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf878c04aaf76f3a9b58a6191541099e7e78f3472e724e2c3d838161e6c5973d"} err="failed to get container status \"cf878c04aaf76f3a9b58a6191541099e7e78f3472e724e2c3d838161e6c5973d\": rpc error: code = NotFound desc = could not find container \"cf878c04aaf76f3a9b58a6191541099e7e78f3472e724e2c3d838161e6c5973d\": container with ID starting with cf878c04aaf76f3a9b58a6191541099e7e78f3472e724e2c3d838161e6c5973d not found: ID does not exist" Jan 26 08:41:14 crc kubenswrapper[4806]: I0126 08:41:14.076492 4806 scope.go:117] "RemoveContainer" containerID="3fd9902e10a584e94e2d1b8d03aeab3dc59a9699457321a7ec5adb5937452769" Jan 26 08:41:14 crc kubenswrapper[4806]: E0126 08:41:14.076865 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd9902e10a584e94e2d1b8d03aeab3dc59a9699457321a7ec5adb5937452769\": container with ID starting with 3fd9902e10a584e94e2d1b8d03aeab3dc59a9699457321a7ec5adb5937452769 not found: ID does not exist" containerID="3fd9902e10a584e94e2d1b8d03aeab3dc59a9699457321a7ec5adb5937452769" Jan 26 08:41:14 crc kubenswrapper[4806]: I0126 08:41:14.076909 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd9902e10a584e94e2d1b8d03aeab3dc59a9699457321a7ec5adb5937452769"} err="failed to get container status \"3fd9902e10a584e94e2d1b8d03aeab3dc59a9699457321a7ec5adb5937452769\": rpc error: code = NotFound desc = could not find container \"3fd9902e10a584e94e2d1b8d03aeab3dc59a9699457321a7ec5adb5937452769\": container with ID starting with 3fd9902e10a584e94e2d1b8d03aeab3dc59a9699457321a7ec5adb5937452769 not found: ID does not exist" Jan 26 08:41:15 crc kubenswrapper[4806]: I0126 08:41:15.056165 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62beedb9-4317-493f-af0d-a841b9bda0bc" path="/var/lib/kubelet/pods/62beedb9-4317-493f-af0d-a841b9bda0bc/volumes" Jan 26 08:41:15 crc kubenswrapper[4806]: I0126 08:41:15.806676 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:41:15 crc kubenswrapper[4806]: I0126 08:41:15.807162 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.178803 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 26 08:41:25 crc kubenswrapper[4806]: E0126 08:41:25.179767 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62beedb9-4317-493f-af0d-a841b9bda0bc" containerName="registry-server" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.179780 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="62beedb9-4317-493f-af0d-a841b9bda0bc" containerName="registry-server" Jan 26 08:41:25 crc kubenswrapper[4806]: E0126 08:41:25.179803 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62beedb9-4317-493f-af0d-a841b9bda0bc" containerName="extract-content" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.179811 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="62beedb9-4317-493f-af0d-a841b9bda0bc" containerName="extract-content" Jan 26 08:41:25 crc kubenswrapper[4806]: E0126 08:41:25.179833 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62beedb9-4317-493f-af0d-a841b9bda0bc" containerName="extract-utilities" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.179839 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="62beedb9-4317-493f-af0d-a841b9bda0bc" containerName="extract-utilities" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.180006 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="62beedb9-4317-493f-af0d-a841b9bda0bc" containerName="registry-server" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.180713 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.189540 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.189708 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.191841 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.193103 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-v9vjw" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.209550 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.355183 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.355405 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d392e063-ed04-4768-b95c-cbd7d0e5afda-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.355514 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qggff\" (UniqueName: \"kubernetes.io/projected/d392e063-ed04-4768-b95c-cbd7d0e5afda-kube-api-access-qggff\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.355549 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d392e063-ed04-4768-b95c-cbd7d0e5afda-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.355652 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.355718 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: 
\"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.355757 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d392e063-ed04-4768-b95c-cbd7d0e5afda-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.355825 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.355870 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d392e063-ed04-4768-b95c-cbd7d0e5afda-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.457565 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qggff\" (UniqueName: \"kubernetes.io/projected/d392e063-ed04-4768-b95c-cbd7d0e5afda-kube-api-access-qggff\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.457626 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d392e063-ed04-4768-b95c-cbd7d0e5afda-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.457660 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.457712 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.457766 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d392e063-ed04-4768-b95c-cbd7d0e5afda-test-operator-ephemeral-temporary\") pod 
\"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.457803 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.457829 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d392e063-ed04-4768-b95c-cbd7d0e5afda-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.457880 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d392e063-ed04-4768-b95c-cbd7d0e5afda-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.457897 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.458701 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d392e063-ed04-4768-b95c-cbd7d0e5afda-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.459148 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d392e063-ed04-4768-b95c-cbd7d0e5afda-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.459291 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d392e063-ed04-4768-b95c-cbd7d0e5afda-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.459937 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d392e063-ed04-4768-b95c-cbd7d0e5afda-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.460049 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.464469 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.472498 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.473484 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.482425 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qggff\" (UniqueName: \"kubernetes.io/projected/d392e063-ed04-4768-b95c-cbd7d0e5afda-kube-api-access-qggff\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.491846 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:25 crc kubenswrapper[4806]: I0126 08:41:25.498789 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:41:26 crc kubenswrapper[4806]: I0126 08:41:26.076174 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 26 08:41:27 crc kubenswrapper[4806]: I0126 08:41:27.112665 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"d392e063-ed04-4768-b95c-cbd7d0e5afda","Type":"ContainerStarted","Data":"3267d0c955ac1e44cf22a3d3bf078d3d192439d417fa58dd44f449c69ca61f66"} Jan 26 08:41:45 crc kubenswrapper[4806]: I0126 08:41:45.806971 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:41:45 crc kubenswrapper[4806]: I0126 08:41:45.808494 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:42:15 crc kubenswrapper[4806]: I0126 08:42:15.807008 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:42:15 crc kubenswrapper[4806]: I0126 08:42:15.807735 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:42:15 crc kubenswrapper[4806]: I0126 08:42:15.807790 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:42:15 crc kubenswrapper[4806]: I0126 08:42:15.808551 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b1dcff18202e7b1ccf45070afc686479bd6ae343787b05873debdd35fcca2bb"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:42:15 crc kubenswrapper[4806]: I0126 08:42:15.808606 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://5b1dcff18202e7b1ccf45070afc686479bd6ae343787b05873debdd35fcca2bb" gracePeriod=600 Jan 26 08:42:16 crc kubenswrapper[4806]: I0126 08:42:16.736327 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="5b1dcff18202e7b1ccf45070afc686479bd6ae343787b05873debdd35fcca2bb" exitCode=0 Jan 26 08:42:16 crc kubenswrapper[4806]: I0126 08:42:16.736643 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"5b1dcff18202e7b1ccf45070afc686479bd6ae343787b05873debdd35fcca2bb"} Jan 26 08:42:16 crc kubenswrapper[4806]: I0126 08:42:16.736722 4806 scope.go:117] "RemoveContainer" containerID="7ef77f272b960ff89ea7700ce294299cc718d9cbd53dfd55d5742a86584b3bca" Jan 26 08:42:21 crc kubenswrapper[4806]: E0126 08:42:21.912389 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb" Jan 26 08:42:21 crc kubenswrapper[4806]: E0126 08:42:21.912917 4806 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb" Jan 26 08:42:21 crc kubenswrapper[4806]: E0126 08:42:21.914021 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qggff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,Loc
alhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-multi-thread-testing_openstack(d392e063-ed04-4768-b95c-cbd7d0e5afda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 08:42:21 crc kubenswrapper[4806]: E0126 08:42:21.915768 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="d392e063-ed04-4768-b95c-cbd7d0e5afda" Jan 26 08:42:22 crc kubenswrapper[4806]: I0126 08:42:22.795362 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43"} Jan 26 08:42:22 crc kubenswrapper[4806]: E0126 08:42:22.796765 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="d392e063-ed04-4768-b95c-cbd7d0e5afda" Jan 26 08:42:36 crc kubenswrapper[4806]: I0126 08:42:36.045039 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 08:42:36 crc kubenswrapper[4806]: I0126 08:42:36.543883 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 08:42:38 crc kubenswrapper[4806]: I0126 08:42:38.946553 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"d392e063-ed04-4768-b95c-cbd7d0e5afda","Type":"ContainerStarted","Data":"28ce943823052dfa7a387d045ee512ed129b9e6deb0bca1fe81de4164b5c4746"} Jan 26 08:42:38 crc kubenswrapper[4806]: I0126 08:42:38.978110 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=4.518830942 podStartE2EDuration="1m14.978085228s" podCreationTimestamp="2026-01-26 08:41:24 +0000 UTC" firstStartedPulling="2026-01-26 08:41:26.081701239 +0000 UTC m=+2865.346109295" lastFinishedPulling="2026-01-26 08:42:36.540955525 +0000 UTC m=+2935.805363581" observedRunningTime="2026-01-26 08:42:38.970785013 +0000 UTC m=+2938.235193119" watchObservedRunningTime="2026-01-26 08:42:38.978085228 +0000 UTC m=+2938.242493324" Jan 26 08:44:45 crc kubenswrapper[4806]: I0126 08:44:45.806453 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:44:45 crc kubenswrapper[4806]: I0126 08:44:45.807034 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.422771 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp"] Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.426507 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.432479 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.435207 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.462625 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx2q6\" (UniqueName: \"kubernetes.io/projected/d931af36-7c58-4c04-b118-53cdbaafb655-kube-api-access-cx2q6\") pod \"collect-profiles-29490285-4cwsp\" (UID: \"d931af36-7c58-4c04-b118-53cdbaafb655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.462711 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d931af36-7c58-4c04-b118-53cdbaafb655-config-volume\") pod \"collect-profiles-29490285-4cwsp\" (UID: \"d931af36-7c58-4c04-b118-53cdbaafb655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.462771 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d931af36-7c58-4c04-b118-53cdbaafb655-secret-volume\") pod \"collect-profiles-29490285-4cwsp\" (UID: \"d931af36-7c58-4c04-b118-53cdbaafb655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.551363 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp"] Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.565909 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx2q6\" (UniqueName: \"kubernetes.io/projected/d931af36-7c58-4c04-b118-53cdbaafb655-kube-api-access-cx2q6\") pod \"collect-profiles-29490285-4cwsp\" (UID: \"d931af36-7c58-4c04-b118-53cdbaafb655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.565992 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d931af36-7c58-4c04-b118-53cdbaafb655-config-volume\") pod 
\"collect-profiles-29490285-4cwsp\" (UID: \"d931af36-7c58-4c04-b118-53cdbaafb655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.566047 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d931af36-7c58-4c04-b118-53cdbaafb655-secret-volume\") pod \"collect-profiles-29490285-4cwsp\" (UID: \"d931af36-7c58-4c04-b118-53cdbaafb655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.569021 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d931af36-7c58-4c04-b118-53cdbaafb655-config-volume\") pod \"collect-profiles-29490285-4cwsp\" (UID: \"d931af36-7c58-4c04-b118-53cdbaafb655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.583408 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d931af36-7c58-4c04-b118-53cdbaafb655-secret-volume\") pod \"collect-profiles-29490285-4cwsp\" (UID: \"d931af36-7c58-4c04-b118-53cdbaafb655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.588338 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx2q6\" (UniqueName: \"kubernetes.io/projected/d931af36-7c58-4c04-b118-53cdbaafb655-kube-api-access-cx2q6\") pod \"collect-profiles-29490285-4cwsp\" (UID: \"d931af36-7c58-4c04-b118-53cdbaafb655\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:00 crc kubenswrapper[4806]: I0126 08:45:00.755193 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:01 crc kubenswrapper[4806]: I0126 08:45:01.717310 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp"] Jan 26 08:45:02 crc kubenswrapper[4806]: I0126 08:45:02.683686 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" event={"ID":"d931af36-7c58-4c04-b118-53cdbaafb655","Type":"ContainerDied","Data":"7d9ca67e779af60e8f0b4b5bf373f0c45ed4484b58d7a75958970c336957a521"} Jan 26 08:45:02 crc kubenswrapper[4806]: I0126 08:45:02.684739 4806 generic.go:334] "Generic (PLEG): container finished" podID="d931af36-7c58-4c04-b118-53cdbaafb655" containerID="7d9ca67e779af60e8f0b4b5bf373f0c45ed4484b58d7a75958970c336957a521" exitCode=0 Jan 26 08:45:02 crc kubenswrapper[4806]: I0126 08:45:02.684814 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" event={"ID":"d931af36-7c58-4c04-b118-53cdbaafb655","Type":"ContainerStarted","Data":"c89efe2cf27e407af096a672aee72e3249808064cc49a9f1e01de9b8772b3d69"} Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.302676 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.448906 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d931af36-7c58-4c04-b118-53cdbaafb655-config-volume\") pod \"d931af36-7c58-4c04-b118-53cdbaafb655\" (UID: \"d931af36-7c58-4c04-b118-53cdbaafb655\") " Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.448999 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx2q6\" (UniqueName: \"kubernetes.io/projected/d931af36-7c58-4c04-b118-53cdbaafb655-kube-api-access-cx2q6\") pod \"d931af36-7c58-4c04-b118-53cdbaafb655\" (UID: \"d931af36-7c58-4c04-b118-53cdbaafb655\") " Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.449236 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d931af36-7c58-4c04-b118-53cdbaafb655-secret-volume\") pod \"d931af36-7c58-4c04-b118-53cdbaafb655\" (UID: \"d931af36-7c58-4c04-b118-53cdbaafb655\") " Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.450568 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d931af36-7c58-4c04-b118-53cdbaafb655-config-volume" (OuterVolumeSpecName: "config-volume") pod "d931af36-7c58-4c04-b118-53cdbaafb655" (UID: "d931af36-7c58-4c04-b118-53cdbaafb655"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.451325 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d931af36-7c58-4c04-b118-53cdbaafb655-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.460174 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d931af36-7c58-4c04-b118-53cdbaafb655-kube-api-access-cx2q6" (OuterVolumeSpecName: "kube-api-access-cx2q6") pod "d931af36-7c58-4c04-b118-53cdbaafb655" (UID: "d931af36-7c58-4c04-b118-53cdbaafb655"). InnerVolumeSpecName "kube-api-access-cx2q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.472597 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d931af36-7c58-4c04-b118-53cdbaafb655-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d931af36-7c58-4c04-b118-53cdbaafb655" (UID: "d931af36-7c58-4c04-b118-53cdbaafb655"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.553149 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d931af36-7c58-4c04-b118-53cdbaafb655-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.553197 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx2q6\" (UniqueName: \"kubernetes.io/projected/d931af36-7c58-4c04-b118-53cdbaafb655-kube-api-access-cx2q6\") on node \"crc\" DevicePath \"\"" Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.704935 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" event={"ID":"d931af36-7c58-4c04-b118-53cdbaafb655","Type":"ContainerDied","Data":"c89efe2cf27e407af096a672aee72e3249808064cc49a9f1e01de9b8772b3d69"} Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.704972 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c89efe2cf27e407af096a672aee72e3249808064cc49a9f1e01de9b8772b3d69" Jan 26 08:45:04 crc kubenswrapper[4806]: I0126 08:45:04.705026 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp" Jan 26 08:45:05 crc kubenswrapper[4806]: I0126 08:45:05.417183 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8"] Jan 26 08:45:05 crc kubenswrapper[4806]: I0126 08:45:05.431086 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490240-ttrd8"] Jan 26 08:45:07 crc kubenswrapper[4806]: I0126 08:45:07.053889 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8e8ed9e-0309-4618-8376-ab447ae9bb09" path="/var/lib/kubelet/pods/b8e8ed9e-0309-4618-8376-ab447ae9bb09/volumes" Jan 26 08:45:15 crc kubenswrapper[4806]: I0126 08:45:15.806671 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:45:15 crc kubenswrapper[4806]: I0126 08:45:15.807233 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:45:37 crc kubenswrapper[4806]: I0126 08:45:37.729725 4806 scope.go:117] "RemoveContainer" containerID="7225795f0dc41b00698fbefc84298c1387c254dd54fa9aabb3738812d9426911" Jan 26 08:45:45 crc kubenswrapper[4806]: I0126 08:45:45.806846 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:45:45 crc kubenswrapper[4806]: I0126 08:45:45.807287 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:45:45 crc kubenswrapper[4806]: I0126 08:45:45.807333 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:45:45 crc kubenswrapper[4806]: I0126 08:45:45.807949 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:45:45 crc kubenswrapper[4806]: I0126 08:45:45.808004 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" gracePeriod=600 Jan 26 08:45:45 crc kubenswrapper[4806]: E0126 08:45:45.928623 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:45:46 crc kubenswrapper[4806]: I0126 08:45:46.076081 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" exitCode=0 Jan 26 08:45:46 crc kubenswrapper[4806]: I0126 08:45:46.076125 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43"} Jan 26 08:45:46 crc kubenswrapper[4806]: I0126 08:45:46.076159 4806 scope.go:117] "RemoveContainer" containerID="5b1dcff18202e7b1ccf45070afc686479bd6ae343787b05873debdd35fcca2bb" Jan 26 08:45:46 crc kubenswrapper[4806]: I0126 08:45:46.076825 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:45:46 crc kubenswrapper[4806]: E0126 08:45:46.077063 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:46:01 crc kubenswrapper[4806]: I0126 08:46:01.047186 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:46:01 crc kubenswrapper[4806]: E0126 08:46:01.048004 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:46:13 crc kubenswrapper[4806]: I0126 08:46:13.042608 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:46:13 crc kubenswrapper[4806]: E0126 08:46:13.043397 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:46:26 crc kubenswrapper[4806]: I0126 08:46:26.042485 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:46:26 crc kubenswrapper[4806]: E0126 08:46:26.043296 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:46:41 crc kubenswrapper[4806]: I0126 08:46:41.066772 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:46:41 crc kubenswrapper[4806]: E0126 08:46:41.067616 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:46:52 crc kubenswrapper[4806]: I0126 08:46:52.041767 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:46:52 crc kubenswrapper[4806]: E0126 08:46:52.042492 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:47:07 crc kubenswrapper[4806]: I0126 08:47:07.043976 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:47:07 crc kubenswrapper[4806]: E0126 08:47:07.044764 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:47:22 crc kubenswrapper[4806]: I0126 08:47:22.042263 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:47:22 crc kubenswrapper[4806]: E0126 08:47:22.042913 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:47:33 crc kubenswrapper[4806]: I0126 08:47:33.042311 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:47:33 crc kubenswrapper[4806]: E0126 08:47:33.043286 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:47:48 crc kubenswrapper[4806]: I0126 08:47:48.041665 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:47:48 crc kubenswrapper[4806]: E0126 08:47:48.042404 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:48:01 crc kubenswrapper[4806]: I0126 08:48:01.047929 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:48:01 crc kubenswrapper[4806]: E0126 08:48:01.050057 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:48:16 crc kubenswrapper[4806]: I0126 08:48:16.042138 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:48:16 crc kubenswrapper[4806]: E0126 08:48:16.043934 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:48:30 crc kubenswrapper[4806]: I0126 08:48:30.042829 4806 
scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:48:30 crc kubenswrapper[4806]: E0126 08:48:30.043625 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:48:42 crc kubenswrapper[4806]: I0126 08:48:42.042210 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:48:42 crc kubenswrapper[4806]: E0126 08:48:42.044407 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:48:56 crc kubenswrapper[4806]: I0126 08:48:56.042535 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:48:56 crc kubenswrapper[4806]: E0126 08:48:56.043424 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:49:09 crc kubenswrapper[4806]: I0126 08:49:09.043081 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:49:09 crc kubenswrapper[4806]: E0126 08:49:09.043892 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:49:22 crc kubenswrapper[4806]: I0126 08:49:22.043421 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:49:22 crc kubenswrapper[4806]: E0126 08:49:22.044249 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.633899 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w5n7l"] Jan 26 08:49:24 crc kubenswrapper[4806]: E0126 08:49:24.637022 4806 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="d931af36-7c58-4c04-b118-53cdbaafb655" containerName="collect-profiles" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.637063 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d931af36-7c58-4c04-b118-53cdbaafb655" containerName="collect-profiles" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.637737 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d931af36-7c58-4c04-b118-53cdbaafb655" containerName="collect-profiles" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.640892 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.760836 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmsbc\" (UniqueName: \"kubernetes.io/projected/0e9c1006-435d-4127-928f-1acf363f3908-kube-api-access-dmsbc\") pod \"redhat-operators-w5n7l\" (UID: \"0e9c1006-435d-4127-928f-1acf363f3908\") " pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.760979 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9c1006-435d-4127-928f-1acf363f3908-utilities\") pod \"redhat-operators-w5n7l\" (UID: \"0e9c1006-435d-4127-928f-1acf363f3908\") " pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.761039 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9c1006-435d-4127-928f-1acf363f3908-catalog-content\") pod \"redhat-operators-w5n7l\" (UID: \"0e9c1006-435d-4127-928f-1acf363f3908\") " pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.785642 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w5n7l"] Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.862936 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmsbc\" (UniqueName: \"kubernetes.io/projected/0e9c1006-435d-4127-928f-1acf363f3908-kube-api-access-dmsbc\") pod \"redhat-operators-w5n7l\" (UID: \"0e9c1006-435d-4127-928f-1acf363f3908\") " pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.863019 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9c1006-435d-4127-928f-1acf363f3908-utilities\") pod \"redhat-operators-w5n7l\" (UID: \"0e9c1006-435d-4127-928f-1acf363f3908\") " pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.863060 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9c1006-435d-4127-928f-1acf363f3908-catalog-content\") pod \"redhat-operators-w5n7l\" (UID: \"0e9c1006-435d-4127-928f-1acf363f3908\") " pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.864786 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9c1006-435d-4127-928f-1acf363f3908-utilities\") pod \"redhat-operators-w5n7l\" (UID: 
\"0e9c1006-435d-4127-928f-1acf363f3908\") " pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.864935 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9c1006-435d-4127-928f-1acf363f3908-catalog-content\") pod \"redhat-operators-w5n7l\" (UID: \"0e9c1006-435d-4127-928f-1acf363f3908\") " pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.890498 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmsbc\" (UniqueName: \"kubernetes.io/projected/0e9c1006-435d-4127-928f-1acf363f3908-kube-api-access-dmsbc\") pod \"redhat-operators-w5n7l\" (UID: \"0e9c1006-435d-4127-928f-1acf363f3908\") " pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:24 crc kubenswrapper[4806]: I0126 08:49:24.965602 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:26 crc kubenswrapper[4806]: I0126 08:49:26.851989 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w5n7l"] Jan 26 08:49:27 crc kubenswrapper[4806]: I0126 08:49:27.035779 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5n7l" event={"ID":"0e9c1006-435d-4127-928f-1acf363f3908","Type":"ContainerStarted","Data":"0c815d27f039a1f3f0480a3eb3f7acced33e2b04c7b2feea79b4d46c1b922280"} Jan 26 08:49:28 crc kubenswrapper[4806]: I0126 08:49:28.044584 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5n7l" event={"ID":"0e9c1006-435d-4127-928f-1acf363f3908","Type":"ContainerDied","Data":"a481f62d8b4c9028fa9fd5717138944b0f2cb06679f12f689e99e962126ee18a"} Jan 26 08:49:28 crc kubenswrapper[4806]: I0126 08:49:28.044723 4806 generic.go:334] "Generic (PLEG): container finished" podID="0e9c1006-435d-4127-928f-1acf363f3908" containerID="a481f62d8b4c9028fa9fd5717138944b0f2cb06679f12f689e99e962126ee18a" exitCode=0 Jan 26 08:49:28 crc kubenswrapper[4806]: I0126 08:49:28.047793 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 08:49:30 crc kubenswrapper[4806]: I0126 08:49:30.063169 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5n7l" event={"ID":"0e9c1006-435d-4127-928f-1acf363f3908","Type":"ContainerStarted","Data":"79a9ab1d2207b13004d18b05a04dcbd030e8da9833f04b96aff28b1692b9ef6e"} Jan 26 08:49:34 crc kubenswrapper[4806]: I0126 08:49:34.095080 4806 generic.go:334] "Generic (PLEG): container finished" podID="0e9c1006-435d-4127-928f-1acf363f3908" containerID="79a9ab1d2207b13004d18b05a04dcbd030e8da9833f04b96aff28b1692b9ef6e" exitCode=0 Jan 26 08:49:34 crc kubenswrapper[4806]: I0126 08:49:34.095166 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5n7l" event={"ID":"0e9c1006-435d-4127-928f-1acf363f3908","Type":"ContainerDied","Data":"79a9ab1d2207b13004d18b05a04dcbd030e8da9833f04b96aff28b1692b9ef6e"} Jan 26 08:49:35 crc kubenswrapper[4806]: I0126 08:49:35.042693 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:49:35 crc kubenswrapper[4806]: E0126 08:49:35.043245 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:49:35 crc kubenswrapper[4806]: I0126 08:49:35.107852 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5n7l" event={"ID":"0e9c1006-435d-4127-928f-1acf363f3908","Type":"ContainerStarted","Data":"02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76"} Jan 26 08:49:35 crc kubenswrapper[4806]: I0126 08:49:35.149389 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w5n7l" podStartSLOduration=4.668055261 podStartE2EDuration="11.148932462s" podCreationTimestamp="2026-01-26 08:49:24 +0000 UTC" firstStartedPulling="2026-01-26 08:49:28.04624851 +0000 UTC m=+3347.310656566" lastFinishedPulling="2026-01-26 08:49:34.527125711 +0000 UTC m=+3353.791533767" observedRunningTime="2026-01-26 08:49:35.14422549 +0000 UTC m=+3354.408633556" watchObservedRunningTime="2026-01-26 08:49:35.148932462 +0000 UTC m=+3354.413340518" Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.216041 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nhhn2"] Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.222633 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.243426 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhhn2"] Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.323658 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba161922-4b73-43b6-aa07-60a401b9b149-utilities\") pod \"redhat-marketplace-nhhn2\" (UID: \"ba161922-4b73-43b6-aa07-60a401b9b149\") " pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.323794 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qx5c\" (UniqueName: \"kubernetes.io/projected/ba161922-4b73-43b6-aa07-60a401b9b149-kube-api-access-6qx5c\") pod \"redhat-marketplace-nhhn2\" (UID: \"ba161922-4b73-43b6-aa07-60a401b9b149\") " pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.323833 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba161922-4b73-43b6-aa07-60a401b9b149-catalog-content\") pod \"redhat-marketplace-nhhn2\" (UID: \"ba161922-4b73-43b6-aa07-60a401b9b149\") " pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.424772 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba161922-4b73-43b6-aa07-60a401b9b149-catalog-content\") pod \"redhat-marketplace-nhhn2\" (UID: \"ba161922-4b73-43b6-aa07-60a401b9b149\") " pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.425151 4806 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba161922-4b73-43b6-aa07-60a401b9b149-utilities\") pod \"redhat-marketplace-nhhn2\" (UID: \"ba161922-4b73-43b6-aa07-60a401b9b149\") " pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.425248 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qx5c\" (UniqueName: \"kubernetes.io/projected/ba161922-4b73-43b6-aa07-60a401b9b149-kube-api-access-6qx5c\") pod \"redhat-marketplace-nhhn2\" (UID: \"ba161922-4b73-43b6-aa07-60a401b9b149\") " pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.426318 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba161922-4b73-43b6-aa07-60a401b9b149-catalog-content\") pod \"redhat-marketplace-nhhn2\" (UID: \"ba161922-4b73-43b6-aa07-60a401b9b149\") " pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.426356 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba161922-4b73-43b6-aa07-60a401b9b149-utilities\") pod \"redhat-marketplace-nhhn2\" (UID: \"ba161922-4b73-43b6-aa07-60a401b9b149\") " pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.456778 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qx5c\" (UniqueName: \"kubernetes.io/projected/ba161922-4b73-43b6-aa07-60a401b9b149-kube-api-access-6qx5c\") pod \"redhat-marketplace-nhhn2\" (UID: \"ba161922-4b73-43b6-aa07-60a401b9b149\") " pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:39 crc kubenswrapper[4806]: I0126 08:49:39.557611 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:40 crc kubenswrapper[4806]: I0126 08:49:40.196790 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhhn2"] Jan 26 08:49:41 crc kubenswrapper[4806]: I0126 08:49:41.165443 4806 generic.go:334] "Generic (PLEG): container finished" podID="ba161922-4b73-43b6-aa07-60a401b9b149" containerID="5664a9ec5c29b8adcaf41e501d4ad0188653439d33cc47bf766175c9e2057aef" exitCode=0 Jan 26 08:49:41 crc kubenswrapper[4806]: I0126 08:49:41.165611 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhhn2" event={"ID":"ba161922-4b73-43b6-aa07-60a401b9b149","Type":"ContainerDied","Data":"5664a9ec5c29b8adcaf41e501d4ad0188653439d33cc47bf766175c9e2057aef"} Jan 26 08:49:41 crc kubenswrapper[4806]: I0126 08:49:41.166207 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhhn2" event={"ID":"ba161922-4b73-43b6-aa07-60a401b9b149","Type":"ContainerStarted","Data":"01498327da7af950eb82c838544b266255965c914ccd50d244c931dbf7cb9325"} Jan 26 08:49:42 crc kubenswrapper[4806]: I0126 08:49:42.182572 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhhn2" event={"ID":"ba161922-4b73-43b6-aa07-60a401b9b149","Type":"ContainerStarted","Data":"1fccfab77d2988a8fe2bceb2154ff9ed5fe59b6179385ecf8c396500c05de153"} Jan 26 08:49:43 crc kubenswrapper[4806]: I0126 08:49:43.192252 4806 generic.go:334] "Generic (PLEG): container finished" podID="ba161922-4b73-43b6-aa07-60a401b9b149" containerID="1fccfab77d2988a8fe2bceb2154ff9ed5fe59b6179385ecf8c396500c05de153" exitCode=0 Jan 26 08:49:43 crc kubenswrapper[4806]: I0126 08:49:43.192308 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhhn2" event={"ID":"ba161922-4b73-43b6-aa07-60a401b9b149","Type":"ContainerDied","Data":"1fccfab77d2988a8fe2bceb2154ff9ed5fe59b6179385ecf8c396500c05de153"} Jan 26 08:49:44 crc kubenswrapper[4806]: I0126 08:49:44.202965 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhhn2" event={"ID":"ba161922-4b73-43b6-aa07-60a401b9b149","Type":"ContainerStarted","Data":"cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4"} Jan 26 08:49:44 crc kubenswrapper[4806]: I0126 08:49:44.226618 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nhhn2" podStartSLOduration=2.830500836 podStartE2EDuration="5.226594543s" podCreationTimestamp="2026-01-26 08:49:39 +0000 UTC" firstStartedPulling="2026-01-26 08:49:41.174978681 +0000 UTC m=+3360.439386737" lastFinishedPulling="2026-01-26 08:49:43.571072388 +0000 UTC m=+3362.835480444" observedRunningTime="2026-01-26 08:49:44.218774493 +0000 UTC m=+3363.483182549" watchObservedRunningTime="2026-01-26 08:49:44.226594543 +0000 UTC m=+3363.491002599" Jan 26 08:49:44 crc kubenswrapper[4806]: I0126 08:49:44.966726 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:44 crc kubenswrapper[4806]: I0126 08:49:44.967104 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:46 crc kubenswrapper[4806]: I0126 08:49:46.041543 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w5n7l" 
podUID="0e9c1006-435d-4127-928f-1acf363f3908" containerName="registry-server" probeResult="failure" output=< Jan 26 08:49:46 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 08:49:46 crc kubenswrapper[4806]: > Jan 26 08:49:47 crc kubenswrapper[4806]: I0126 08:49:47.042109 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:49:47 crc kubenswrapper[4806]: E0126 08:49:47.042353 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:49:49 crc kubenswrapper[4806]: I0126 08:49:49.558017 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:49 crc kubenswrapper[4806]: I0126 08:49:49.558646 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:49 crc kubenswrapper[4806]: I0126 08:49:49.607911 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:50 crc kubenswrapper[4806]: I0126 08:49:50.293208 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:51 crc kubenswrapper[4806]: I0126 08:49:51.985723 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhhn2"] Jan 26 08:49:52 crc kubenswrapper[4806]: I0126 08:49:52.289769 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nhhn2" podUID="ba161922-4b73-43b6-aa07-60a401b9b149" containerName="registry-server" containerID="cri-o://cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4" gracePeriod=2 Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.162073 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.301811 4806 generic.go:334] "Generic (PLEG): container finished" podID="ba161922-4b73-43b6-aa07-60a401b9b149" containerID="cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4" exitCode=0 Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.301852 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhhn2" event={"ID":"ba161922-4b73-43b6-aa07-60a401b9b149","Type":"ContainerDied","Data":"cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4"} Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.301877 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nhhn2" event={"ID":"ba161922-4b73-43b6-aa07-60a401b9b149","Type":"ContainerDied","Data":"01498327da7af950eb82c838544b266255965c914ccd50d244c931dbf7cb9325"} Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.301895 4806 scope.go:117] "RemoveContainer" containerID="cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.302007 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nhhn2" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.314665 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba161922-4b73-43b6-aa07-60a401b9b149-utilities\") pod \"ba161922-4b73-43b6-aa07-60a401b9b149\" (UID: \"ba161922-4b73-43b6-aa07-60a401b9b149\") " Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.314761 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qx5c\" (UniqueName: \"kubernetes.io/projected/ba161922-4b73-43b6-aa07-60a401b9b149-kube-api-access-6qx5c\") pod \"ba161922-4b73-43b6-aa07-60a401b9b149\" (UID: \"ba161922-4b73-43b6-aa07-60a401b9b149\") " Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.314923 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba161922-4b73-43b6-aa07-60a401b9b149-catalog-content\") pod \"ba161922-4b73-43b6-aa07-60a401b9b149\" (UID: \"ba161922-4b73-43b6-aa07-60a401b9b149\") " Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.316165 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba161922-4b73-43b6-aa07-60a401b9b149-utilities" (OuterVolumeSpecName: "utilities") pod "ba161922-4b73-43b6-aa07-60a401b9b149" (UID: "ba161922-4b73-43b6-aa07-60a401b9b149"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.331872 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba161922-4b73-43b6-aa07-60a401b9b149-kube-api-access-6qx5c" (OuterVolumeSpecName: "kube-api-access-6qx5c") pod "ba161922-4b73-43b6-aa07-60a401b9b149" (UID: "ba161922-4b73-43b6-aa07-60a401b9b149"). InnerVolumeSpecName "kube-api-access-6qx5c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.340107 4806 scope.go:117] "RemoveContainer" containerID="1fccfab77d2988a8fe2bceb2154ff9ed5fe59b6179385ecf8c396500c05de153" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.347356 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba161922-4b73-43b6-aa07-60a401b9b149-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba161922-4b73-43b6-aa07-60a401b9b149" (UID: "ba161922-4b73-43b6-aa07-60a401b9b149"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.417187 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba161922-4b73-43b6-aa07-60a401b9b149-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.417222 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba161922-4b73-43b6-aa07-60a401b9b149-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.417235 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qx5c\" (UniqueName: \"kubernetes.io/projected/ba161922-4b73-43b6-aa07-60a401b9b149-kube-api-access-6qx5c\") on node \"crc\" DevicePath \"\"" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.424763 4806 scope.go:117] "RemoveContainer" containerID="5664a9ec5c29b8adcaf41e501d4ad0188653439d33cc47bf766175c9e2057aef" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.452251 4806 scope.go:117] "RemoveContainer" containerID="cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4" Jan 26 08:49:53 crc kubenswrapper[4806]: E0126 08:49:53.453873 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4\": container with ID starting with cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4 not found: ID does not exist" containerID="cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.454112 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4"} err="failed to get container status \"cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4\": rpc error: code = NotFound desc = could not find container \"cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4\": container with ID starting with cbd9f021136746fd1f43b9944cf28f632b29934f2539ff247a8fbf19183cf0d4 not found: ID does not exist" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.454150 4806 scope.go:117] "RemoveContainer" containerID="1fccfab77d2988a8fe2bceb2154ff9ed5fe59b6179385ecf8c396500c05de153" Jan 26 08:49:53 crc kubenswrapper[4806]: E0126 08:49:53.455368 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fccfab77d2988a8fe2bceb2154ff9ed5fe59b6179385ecf8c396500c05de153\": container with ID starting with 1fccfab77d2988a8fe2bceb2154ff9ed5fe59b6179385ecf8c396500c05de153 not found: ID does not exist" containerID="1fccfab77d2988a8fe2bceb2154ff9ed5fe59b6179385ecf8c396500c05de153" Jan 
26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.455394 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fccfab77d2988a8fe2bceb2154ff9ed5fe59b6179385ecf8c396500c05de153"} err="failed to get container status \"1fccfab77d2988a8fe2bceb2154ff9ed5fe59b6179385ecf8c396500c05de153\": rpc error: code = NotFound desc = could not find container \"1fccfab77d2988a8fe2bceb2154ff9ed5fe59b6179385ecf8c396500c05de153\": container with ID starting with 1fccfab77d2988a8fe2bceb2154ff9ed5fe59b6179385ecf8c396500c05de153 not found: ID does not exist" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.455433 4806 scope.go:117] "RemoveContainer" containerID="5664a9ec5c29b8adcaf41e501d4ad0188653439d33cc47bf766175c9e2057aef" Jan 26 08:49:53 crc kubenswrapper[4806]: E0126 08:49:53.455866 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5664a9ec5c29b8adcaf41e501d4ad0188653439d33cc47bf766175c9e2057aef\": container with ID starting with 5664a9ec5c29b8adcaf41e501d4ad0188653439d33cc47bf766175c9e2057aef not found: ID does not exist" containerID="5664a9ec5c29b8adcaf41e501d4ad0188653439d33cc47bf766175c9e2057aef" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.455892 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5664a9ec5c29b8adcaf41e501d4ad0188653439d33cc47bf766175c9e2057aef"} err="failed to get container status \"5664a9ec5c29b8adcaf41e501d4ad0188653439d33cc47bf766175c9e2057aef\": rpc error: code = NotFound desc = could not find container \"5664a9ec5c29b8adcaf41e501d4ad0188653439d33cc47bf766175c9e2057aef\": container with ID starting with 5664a9ec5c29b8adcaf41e501d4ad0188653439d33cc47bf766175c9e2057aef not found: ID does not exist" Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.634941 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhhn2"] Jan 26 08:49:53 crc kubenswrapper[4806]: I0126 08:49:53.642784 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nhhn2"] Jan 26 08:49:55 crc kubenswrapper[4806]: I0126 08:49:55.014736 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:55 crc kubenswrapper[4806]: I0126 08:49:55.055280 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba161922-4b73-43b6-aa07-60a401b9b149" path="/var/lib/kubelet/pods/ba161922-4b73-43b6-aa07-60a401b9b149/volumes" Jan 26 08:49:55 crc kubenswrapper[4806]: I0126 08:49:55.075054 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:56 crc kubenswrapper[4806]: I0126 08:49:56.789202 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w5n7l"] Jan 26 08:49:56 crc kubenswrapper[4806]: I0126 08:49:56.790590 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w5n7l" podUID="0e9c1006-435d-4127-928f-1acf363f3908" containerName="registry-server" containerID="cri-o://02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76" gracePeriod=2 Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.309902 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.338947 4806 generic.go:334] "Generic (PLEG): container finished" podID="0e9c1006-435d-4127-928f-1acf363f3908" containerID="02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76" exitCode=0 Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.338988 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5n7l" event={"ID":"0e9c1006-435d-4127-928f-1acf363f3908","Type":"ContainerDied","Data":"02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76"} Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.339007 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w5n7l" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.339026 4806 scope.go:117] "RemoveContainer" containerID="02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.339015 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w5n7l" event={"ID":"0e9c1006-435d-4127-928f-1acf363f3908","Type":"ContainerDied","Data":"0c815d27f039a1f3f0480a3eb3f7acced33e2b04c7b2feea79b4d46c1b922280"} Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.358389 4806 scope.go:117] "RemoveContainer" containerID="79a9ab1d2207b13004d18b05a04dcbd030e8da9833f04b96aff28b1692b9ef6e" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.389373 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9c1006-435d-4127-928f-1acf363f3908-catalog-content\") pod \"0e9c1006-435d-4127-928f-1acf363f3908\" (UID: \"0e9c1006-435d-4127-928f-1acf363f3908\") " Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.389502 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9c1006-435d-4127-928f-1acf363f3908-utilities\") pod \"0e9c1006-435d-4127-928f-1acf363f3908\" (UID: \"0e9c1006-435d-4127-928f-1acf363f3908\") " Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.389681 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmsbc\" (UniqueName: \"kubernetes.io/projected/0e9c1006-435d-4127-928f-1acf363f3908-kube-api-access-dmsbc\") pod \"0e9c1006-435d-4127-928f-1acf363f3908\" (UID: \"0e9c1006-435d-4127-928f-1acf363f3908\") " Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.390224 4806 scope.go:117] "RemoveContainer" containerID="a481f62d8b4c9028fa9fd5717138944b0f2cb06679f12f689e99e962126ee18a" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.390700 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9c1006-435d-4127-928f-1acf363f3908-utilities" (OuterVolumeSpecName: "utilities") pod "0e9c1006-435d-4127-928f-1acf363f3908" (UID: "0e9c1006-435d-4127-928f-1acf363f3908"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.396275 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e9c1006-435d-4127-928f-1acf363f3908-kube-api-access-dmsbc" (OuterVolumeSpecName: "kube-api-access-dmsbc") pod "0e9c1006-435d-4127-928f-1acf363f3908" (UID: "0e9c1006-435d-4127-928f-1acf363f3908"). InnerVolumeSpecName "kube-api-access-dmsbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.492855 4806 scope.go:117] "RemoveContainer" containerID="02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.493469 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e9c1006-435d-4127-928f-1acf363f3908-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.493500 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmsbc\" (UniqueName: \"kubernetes.io/projected/0e9c1006-435d-4127-928f-1acf363f3908-kube-api-access-dmsbc\") on node \"crc\" DevicePath \"\"" Jan 26 08:49:57 crc kubenswrapper[4806]: E0126 08:49:57.493466 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76\": container with ID starting with 02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76 not found: ID does not exist" containerID="02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.493541 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76"} err="failed to get container status \"02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76\": rpc error: code = NotFound desc = could not find container \"02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76\": container with ID starting with 02f81f23061be3827c6642c7c3e4d51a70c7631a42098fde201194a06eeece76 not found: ID does not exist" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.493665 4806 scope.go:117] "RemoveContainer" containerID="79a9ab1d2207b13004d18b05a04dcbd030e8da9833f04b96aff28b1692b9ef6e" Jan 26 08:49:57 crc kubenswrapper[4806]: E0126 08:49:57.493995 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79a9ab1d2207b13004d18b05a04dcbd030e8da9833f04b96aff28b1692b9ef6e\": container with ID starting with 79a9ab1d2207b13004d18b05a04dcbd030e8da9833f04b96aff28b1692b9ef6e not found: ID does not exist" containerID="79a9ab1d2207b13004d18b05a04dcbd030e8da9833f04b96aff28b1692b9ef6e" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.494054 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79a9ab1d2207b13004d18b05a04dcbd030e8da9833f04b96aff28b1692b9ef6e"} err="failed to get container status \"79a9ab1d2207b13004d18b05a04dcbd030e8da9833f04b96aff28b1692b9ef6e\": rpc error: code = NotFound desc = could not find container \"79a9ab1d2207b13004d18b05a04dcbd030e8da9833f04b96aff28b1692b9ef6e\": container with ID starting with 79a9ab1d2207b13004d18b05a04dcbd030e8da9833f04b96aff28b1692b9ef6e not found: ID does not exist" Jan 26 08:49:57 crc 
kubenswrapper[4806]: I0126 08:49:57.494080 4806 scope.go:117] "RemoveContainer" containerID="a481f62d8b4c9028fa9fd5717138944b0f2cb06679f12f689e99e962126ee18a" Jan 26 08:49:57 crc kubenswrapper[4806]: E0126 08:49:57.494315 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a481f62d8b4c9028fa9fd5717138944b0f2cb06679f12f689e99e962126ee18a\": container with ID starting with a481f62d8b4c9028fa9fd5717138944b0f2cb06679f12f689e99e962126ee18a not found: ID does not exist" containerID="a481f62d8b4c9028fa9fd5717138944b0f2cb06679f12f689e99e962126ee18a" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.494341 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a481f62d8b4c9028fa9fd5717138944b0f2cb06679f12f689e99e962126ee18a"} err="failed to get container status \"a481f62d8b4c9028fa9fd5717138944b0f2cb06679f12f689e99e962126ee18a\": rpc error: code = NotFound desc = could not find container \"a481f62d8b4c9028fa9fd5717138944b0f2cb06679f12f689e99e962126ee18a\": container with ID starting with a481f62d8b4c9028fa9fd5717138944b0f2cb06679f12f689e99e962126ee18a not found: ID does not exist" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.549602 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e9c1006-435d-4127-928f-1acf363f3908-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e9c1006-435d-4127-928f-1acf363f3908" (UID: "0e9c1006-435d-4127-928f-1acf363f3908"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.595707 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e9c1006-435d-4127-928f-1acf363f3908-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.683199 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w5n7l"] Jan 26 08:49:57 crc kubenswrapper[4806]: I0126 08:49:57.694492 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w5n7l"] Jan 26 08:49:58 crc kubenswrapper[4806]: I0126 08:49:58.042226 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:49:58 crc kubenswrapper[4806]: E0126 08:49:58.042584 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:49:59 crc kubenswrapper[4806]: I0126 08:49:59.053491 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e9c1006-435d-4127-928f-1acf363f3908" path="/var/lib/kubelet/pods/0e9c1006-435d-4127-928f-1acf363f3908/volumes" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.557331 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nh9fm"] Jan 26 08:50:09 crc kubenswrapper[4806]: E0126 08:50:09.561895 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba161922-4b73-43b6-aa07-60a401b9b149" 
containerName="extract-utilities" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.561932 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba161922-4b73-43b6-aa07-60a401b9b149" containerName="extract-utilities" Jan 26 08:50:09 crc kubenswrapper[4806]: E0126 08:50:09.561973 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9c1006-435d-4127-928f-1acf363f3908" containerName="registry-server" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.561980 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9c1006-435d-4127-928f-1acf363f3908" containerName="registry-server" Jan 26 08:50:09 crc kubenswrapper[4806]: E0126 08:50:09.561998 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba161922-4b73-43b6-aa07-60a401b9b149" containerName="extract-content" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.562004 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba161922-4b73-43b6-aa07-60a401b9b149" containerName="extract-content" Jan 26 08:50:09 crc kubenswrapper[4806]: E0126 08:50:09.562014 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9c1006-435d-4127-928f-1acf363f3908" containerName="extract-utilities" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.562021 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9c1006-435d-4127-928f-1acf363f3908" containerName="extract-utilities" Jan 26 08:50:09 crc kubenswrapper[4806]: E0126 08:50:09.562043 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e9c1006-435d-4127-928f-1acf363f3908" containerName="extract-content" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.562049 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e9c1006-435d-4127-928f-1acf363f3908" containerName="extract-content" Jan 26 08:50:09 crc kubenswrapper[4806]: E0126 08:50:09.562060 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba161922-4b73-43b6-aa07-60a401b9b149" containerName="registry-server" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.562066 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba161922-4b73-43b6-aa07-60a401b9b149" containerName="registry-server" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.562659 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e9c1006-435d-4127-928f-1acf363f3908" containerName="registry-server" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.562696 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba161922-4b73-43b6-aa07-60a401b9b149" containerName="registry-server" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.565003 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.591408 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nh9fm"] Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.631674 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzrl8\" (UniqueName: \"kubernetes.io/projected/1ee5585f-806d-4bbf-bbf7-b769754b9805-kube-api-access-tzrl8\") pod \"community-operators-nh9fm\" (UID: \"1ee5585f-806d-4bbf-bbf7-b769754b9805\") " pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.631751 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee5585f-806d-4bbf-bbf7-b769754b9805-utilities\") pod \"community-operators-nh9fm\" (UID: \"1ee5585f-806d-4bbf-bbf7-b769754b9805\") " pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.632096 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee5585f-806d-4bbf-bbf7-b769754b9805-catalog-content\") pod \"community-operators-nh9fm\" (UID: \"1ee5585f-806d-4bbf-bbf7-b769754b9805\") " pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.733991 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee5585f-806d-4bbf-bbf7-b769754b9805-utilities\") pod \"community-operators-nh9fm\" (UID: \"1ee5585f-806d-4bbf-bbf7-b769754b9805\") " pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.734134 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee5585f-806d-4bbf-bbf7-b769754b9805-catalog-content\") pod \"community-operators-nh9fm\" (UID: \"1ee5585f-806d-4bbf-bbf7-b769754b9805\") " pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.734195 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzrl8\" (UniqueName: \"kubernetes.io/projected/1ee5585f-806d-4bbf-bbf7-b769754b9805-kube-api-access-tzrl8\") pod \"community-operators-nh9fm\" (UID: \"1ee5585f-806d-4bbf-bbf7-b769754b9805\") " pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.734719 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee5585f-806d-4bbf-bbf7-b769754b9805-catalog-content\") pod \"community-operators-nh9fm\" (UID: \"1ee5585f-806d-4bbf-bbf7-b769754b9805\") " pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.735775 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee5585f-806d-4bbf-bbf7-b769754b9805-utilities\") pod \"community-operators-nh9fm\" (UID: \"1ee5585f-806d-4bbf-bbf7-b769754b9805\") " pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.761449 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tzrl8\" (UniqueName: \"kubernetes.io/projected/1ee5585f-806d-4bbf-bbf7-b769754b9805-kube-api-access-tzrl8\") pod \"community-operators-nh9fm\" (UID: \"1ee5585f-806d-4bbf-bbf7-b769754b9805\") " pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:09 crc kubenswrapper[4806]: I0126 08:50:09.883425 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:10 crc kubenswrapper[4806]: I0126 08:50:10.426816 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nh9fm"] Jan 26 08:50:10 crc kubenswrapper[4806]: I0126 08:50:10.459231 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9fm" event={"ID":"1ee5585f-806d-4bbf-bbf7-b769754b9805","Type":"ContainerStarted","Data":"a7f8c5518bac03e1f21e4dcab813ae1e7c96332722b4a6d337c32498ba03475e"} Jan 26 08:50:11 crc kubenswrapper[4806]: I0126 08:50:11.467999 4806 generic.go:334] "Generic (PLEG): container finished" podID="1ee5585f-806d-4bbf-bbf7-b769754b9805" containerID="4f6641c83d42862762080992013e120939981a49486fdaa0a5383680a8c4e5b3" exitCode=0 Jan 26 08:50:11 crc kubenswrapper[4806]: I0126 08:50:11.468120 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9fm" event={"ID":"1ee5585f-806d-4bbf-bbf7-b769754b9805","Type":"ContainerDied","Data":"4f6641c83d42862762080992013e120939981a49486fdaa0a5383680a8c4e5b3"} Jan 26 08:50:12 crc kubenswrapper[4806]: I0126 08:50:12.480784 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9fm" event={"ID":"1ee5585f-806d-4bbf-bbf7-b769754b9805","Type":"ContainerStarted","Data":"120caa543e400685ec098de6f9425085c57b611740a9c36e7c09365767ed2d18"} Jan 26 08:50:13 crc kubenswrapper[4806]: I0126 08:50:13.098494 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:50:13 crc kubenswrapper[4806]: E0126 08:50:13.099141 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:50:14 crc kubenswrapper[4806]: I0126 08:50:14.499384 4806 generic.go:334] "Generic (PLEG): container finished" podID="1ee5585f-806d-4bbf-bbf7-b769754b9805" containerID="120caa543e400685ec098de6f9425085c57b611740a9c36e7c09365767ed2d18" exitCode=0 Jan 26 08:50:14 crc kubenswrapper[4806]: I0126 08:50:14.499462 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9fm" event={"ID":"1ee5585f-806d-4bbf-bbf7-b769754b9805","Type":"ContainerDied","Data":"120caa543e400685ec098de6f9425085c57b611740a9c36e7c09365767ed2d18"} Jan 26 08:50:15 crc kubenswrapper[4806]: I0126 08:50:15.511298 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9fm" event={"ID":"1ee5585f-806d-4bbf-bbf7-b769754b9805","Type":"ContainerStarted","Data":"8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c"} Jan 26 08:50:15 crc kubenswrapper[4806]: I0126 08:50:15.535976 4806 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nh9fm" podStartSLOduration=2.964786267 podStartE2EDuration="6.535091169s" podCreationTimestamp="2026-01-26 08:50:09 +0000 UTC" firstStartedPulling="2026-01-26 08:50:11.469510742 +0000 UTC m=+3390.733918798" lastFinishedPulling="2026-01-26 08:50:15.039815644 +0000 UTC m=+3394.304223700" observedRunningTime="2026-01-26 08:50:15.527549808 +0000 UTC m=+3394.791957864" watchObservedRunningTime="2026-01-26 08:50:15.535091169 +0000 UTC m=+3394.799499225" Jan 26 08:50:19 crc kubenswrapper[4806]: I0126 08:50:19.883676 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:19 crc kubenswrapper[4806]: I0126 08:50:19.884929 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:20 crc kubenswrapper[4806]: I0126 08:50:20.933279 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nh9fm" podUID="1ee5585f-806d-4bbf-bbf7-b769754b9805" containerName="registry-server" probeResult="failure" output=< Jan 26 08:50:20 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 08:50:20 crc kubenswrapper[4806]: > Jan 26 08:50:26 crc kubenswrapper[4806]: I0126 08:50:26.042112 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:50:26 crc kubenswrapper[4806]: E0126 08:50:26.042964 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:50:29 crc kubenswrapper[4806]: I0126 08:50:29.935183 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:29 crc kubenswrapper[4806]: I0126 08:50:29.985753 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:30 crc kubenswrapper[4806]: I0126 08:50:30.178714 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nh9fm"] Jan 26 08:50:31 crc kubenswrapper[4806]: I0126 08:50:31.636897 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nh9fm" podUID="1ee5585f-806d-4bbf-bbf7-b769754b9805" containerName="registry-server" containerID="cri-o://8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c" gracePeriod=2 Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.408773 4806 util.go:48] "No ready sandbox for pod can be found. 
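The "Observed pod startup duration" entry above for community-operators-nh9fm is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (08:50:15.535091169 minus 08:50:09 = 6.535091169s), and podStartSLOduration is that figure minus the image-pull window lastFinishedPulling minus firstStartedPulling (3.570304902s), giving exactly the logged 2.964786267s. A small Go check of the arithmetic, parsing the timestamps as they are printed in the log (the trailing monotonic "m=+..." part is dropped):

    package main

    import (
    	"fmt"
    	"time"
    )

    func mustParse(s string) time.Time {
    	// Matches the "2026-01-26 08:50:09 +0000 UTC" form used in the log;
    	// fractional seconds are accepted automatically when parsing.
    	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
    	if err != nil {
    		panic(err)
    	}
    	return t
    }

    func main() {
    	created := mustParse("2026-01-26 08:50:09 +0000 UTC")
    	running := mustParse("2026-01-26 08:50:15.535091169 +0000 UTC")
    	pullStart := mustParse("2026-01-26 08:50:11.469510742 +0000 UTC")
    	pullEnd := mustParse("2026-01-26 08:50:15.039815644 +0000 UTC")

    	e2e := running.Sub(created)    // 6.535091169s, the E2E duration
    	pull := pullEnd.Sub(pullStart) // 3.570304902s spent pulling images
    	fmt.Println(e2e, pull, e2e-pull) // e2e minus pull = 2.964786267s, the SLO duration
    }

The same relationship holds for the certified-operators-sshjs entry later in the log (7.058626538s minus 3.466117978s is approximately 3.59250857s).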
Need to start a new one" pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.455243 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee5585f-806d-4bbf-bbf7-b769754b9805-catalog-content\") pod \"1ee5585f-806d-4bbf-bbf7-b769754b9805\" (UID: \"1ee5585f-806d-4bbf-bbf7-b769754b9805\") " Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.455396 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee5585f-806d-4bbf-bbf7-b769754b9805-utilities\") pod \"1ee5585f-806d-4bbf-bbf7-b769754b9805\" (UID: \"1ee5585f-806d-4bbf-bbf7-b769754b9805\") " Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.455425 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzrl8\" (UniqueName: \"kubernetes.io/projected/1ee5585f-806d-4bbf-bbf7-b769754b9805-kube-api-access-tzrl8\") pod \"1ee5585f-806d-4bbf-bbf7-b769754b9805\" (UID: \"1ee5585f-806d-4bbf-bbf7-b769754b9805\") " Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.456604 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ee5585f-806d-4bbf-bbf7-b769754b9805-utilities" (OuterVolumeSpecName: "utilities") pod "1ee5585f-806d-4bbf-bbf7-b769754b9805" (UID: "1ee5585f-806d-4bbf-bbf7-b769754b9805"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.488768 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee5585f-806d-4bbf-bbf7-b769754b9805-kube-api-access-tzrl8" (OuterVolumeSpecName: "kube-api-access-tzrl8") pod "1ee5585f-806d-4bbf-bbf7-b769754b9805" (UID: "1ee5585f-806d-4bbf-bbf7-b769754b9805"). InnerVolumeSpecName "kube-api-access-tzrl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.558091 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ee5585f-806d-4bbf-bbf7-b769754b9805-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.558396 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzrl8\" (UniqueName: \"kubernetes.io/projected/1ee5585f-806d-4bbf-bbf7-b769754b9805-kube-api-access-tzrl8\") on node \"crc\" DevicePath \"\"" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.591238 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ee5585f-806d-4bbf-bbf7-b769754b9805-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ee5585f-806d-4bbf-bbf7-b769754b9805" (UID: "1ee5585f-806d-4bbf-bbf7-b769754b9805"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.646917 4806 generic.go:334] "Generic (PLEG): container finished" podID="1ee5585f-806d-4bbf-bbf7-b769754b9805" containerID="8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c" exitCode=0 Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.647001 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nh9fm" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.647013 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9fm" event={"ID":"1ee5585f-806d-4bbf-bbf7-b769754b9805","Type":"ContainerDied","Data":"8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c"} Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.648059 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh9fm" event={"ID":"1ee5585f-806d-4bbf-bbf7-b769754b9805","Type":"ContainerDied","Data":"a7f8c5518bac03e1f21e4dcab813ae1e7c96332722b4a6d337c32498ba03475e"} Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.648084 4806 scope.go:117] "RemoveContainer" containerID="8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.659664 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ee5585f-806d-4bbf-bbf7-b769754b9805-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.696397 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nh9fm"] Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.698303 4806 scope.go:117] "RemoveContainer" containerID="120caa543e400685ec098de6f9425085c57b611740a9c36e7c09365767ed2d18" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.710348 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nh9fm"] Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.777416 4806 scope.go:117] "RemoveContainer" containerID="4f6641c83d42862762080992013e120939981a49486fdaa0a5383680a8c4e5b3" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.815600 4806 scope.go:117] "RemoveContainer" containerID="8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c" Jan 26 08:50:32 crc kubenswrapper[4806]: E0126 08:50:32.817020 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c\": container with ID starting with 8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c not found: ID does not exist" containerID="8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.817053 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c"} err="failed to get container status \"8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c\": rpc error: code = NotFound desc = could not find container \"8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c\": container with ID starting with 8d549e32c939f53413302848abc00fae52bd1ac3fa0e1f2383bf34a46b33149c not found: ID does not exist" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.817075 4806 scope.go:117] "RemoveContainer" containerID="120caa543e400685ec098de6f9425085c57b611740a9c36e7c09365767ed2d18" Jan 26 08:50:32 crc kubenswrapper[4806]: E0126 08:50:32.817411 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"120caa543e400685ec098de6f9425085c57b611740a9c36e7c09365767ed2d18\": 
container with ID starting with 120caa543e400685ec098de6f9425085c57b611740a9c36e7c09365767ed2d18 not found: ID does not exist" containerID="120caa543e400685ec098de6f9425085c57b611740a9c36e7c09365767ed2d18" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.817430 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"120caa543e400685ec098de6f9425085c57b611740a9c36e7c09365767ed2d18"} err="failed to get container status \"120caa543e400685ec098de6f9425085c57b611740a9c36e7c09365767ed2d18\": rpc error: code = NotFound desc = could not find container \"120caa543e400685ec098de6f9425085c57b611740a9c36e7c09365767ed2d18\": container with ID starting with 120caa543e400685ec098de6f9425085c57b611740a9c36e7c09365767ed2d18 not found: ID does not exist" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.817443 4806 scope.go:117] "RemoveContainer" containerID="4f6641c83d42862762080992013e120939981a49486fdaa0a5383680a8c4e5b3" Jan 26 08:50:32 crc kubenswrapper[4806]: E0126 08:50:32.822649 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f6641c83d42862762080992013e120939981a49486fdaa0a5383680a8c4e5b3\": container with ID starting with 4f6641c83d42862762080992013e120939981a49486fdaa0a5383680a8c4e5b3 not found: ID does not exist" containerID="4f6641c83d42862762080992013e120939981a49486fdaa0a5383680a8c4e5b3" Jan 26 08:50:32 crc kubenswrapper[4806]: I0126 08:50:32.822678 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f6641c83d42862762080992013e120939981a49486fdaa0a5383680a8c4e5b3"} err="failed to get container status \"4f6641c83d42862762080992013e120939981a49486fdaa0a5383680a8c4e5b3\": rpc error: code = NotFound desc = could not find container \"4f6641c83d42862762080992013e120939981a49486fdaa0a5383680a8c4e5b3\": container with ID starting with 4f6641c83d42862762080992013e120939981a49486fdaa0a5383680a8c4e5b3 not found: ID does not exist" Jan 26 08:50:33 crc kubenswrapper[4806]: I0126 08:50:33.061908 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ee5585f-806d-4bbf-bbf7-b769754b9805" path="/var/lib/kubelet/pods/1ee5585f-806d-4bbf-bbf7-b769754b9805/volumes" Jan 26 08:50:37 crc kubenswrapper[4806]: I0126 08:50:37.041958 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:50:37 crc kubenswrapper[4806]: E0126 08:50:37.042759 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:50:48 crc kubenswrapper[4806]: I0126 08:50:48.042424 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:50:48 crc kubenswrapper[4806]: I0126 08:50:48.794746 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"43ff16cf69e749e0df6a48aeca81c99531814b9095b171bb9f480a5506e94d53"} Jan 26 08:53:15 crc kubenswrapper[4806]: I0126 08:53:15.806161 4806 
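Between 08:49:58 and 08:50:37 the kubelet keeps declining to restart machine-config-daemon with "back-off 5m0s restarting failed container", and only at 08:50:48 does a new container start. Kubernetes documents this restart delay as an exponential back-off (10s, 20s, 40s, ...) capped at five minutes; a toy Go sketch of that capped doubling, not the kubelet's actual implementation:

    package main

    import (
    	"fmt"
    	"time"
    )

    // restartDelay returns the capped exponential back-off after n failed
    // restarts: 10s, 20s, 40s, ... up to the 5m0s cap seen in the log.
    func restartDelay(n int) time.Duration {
    	d := 10 * time.Second
    	for i := 1; i < n; i++ {
    		d *= 2
    		if d >= 5*time.Minute {
    			return 5 * time.Minute
    		}
    	}
    	return d
    }

    func main() {
    	for n := 1; n <= 7; n++ {
    		fmt.Println(n, restartDelay(n)) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
    	}
    }
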
patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:53:15 crc kubenswrapper[4806]: I0126 08:53:15.807661 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:53:45 crc kubenswrapper[4806]: I0126 08:53:45.806767 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:53:45 crc kubenswrapper[4806]: I0126 08:53:45.807451 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:54:15 crc kubenswrapper[4806]: I0126 08:54:15.806007 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:54:15 crc kubenswrapper[4806]: I0126 08:54:15.806996 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:54:15 crc kubenswrapper[4806]: I0126 08:54:15.807080 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:54:15 crc kubenswrapper[4806]: I0126 08:54:15.808342 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43ff16cf69e749e0df6a48aeca81c99531814b9095b171bb9f480a5506e94d53"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:54:15 crc kubenswrapper[4806]: I0126 08:54:15.808466 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://43ff16cf69e749e0df6a48aeca81c99531814b9095b171bb9f480a5506e94d53" gracePeriod=600 Jan 26 08:54:15 crc kubenswrapper[4806]: E0126 08:54:15.955210 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd07502a2_50b0_4012_b335_340a1c694c50.slice/crio-conmon-43ff16cf69e749e0df6a48aeca81c99531814b9095b171bb9f480a5506e94d53.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd07502a2_50b0_4012_b335_340a1c694c50.slice/crio-43ff16cf69e749e0df6a48aeca81c99531814b9095b171bb9f480a5506e94d53.scope\": RecentStats: unable to find data in memory cache]" Jan 26 08:54:16 crc kubenswrapper[4806]: I0126 08:54:16.703826 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="43ff16cf69e749e0df6a48aeca81c99531814b9095b171bb9f480a5506e94d53" exitCode=0 Jan 26 08:54:16 crc kubenswrapper[4806]: I0126 08:54:16.704220 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"43ff16cf69e749e0df6a48aeca81c99531814b9095b171bb9f480a5506e94d53"} Jan 26 08:54:16 crc kubenswrapper[4806]: I0126 08:54:16.704253 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437"} Jan 26 08:54:16 crc kubenswrapper[4806]: I0126 08:54:16.704272 4806 scope.go:117] "RemoveContainer" containerID="10b18e01788d453026d0ea8a18222425ae42bbc6f5a48cace082710622106c43" Jan 26 08:54:46 crc kubenswrapper[4806]: I0126 08:54:46.896355 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sshjs"] Jan 26 08:54:46 crc kubenswrapper[4806]: E0126 08:54:46.897916 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee5585f-806d-4bbf-bbf7-b769754b9805" containerName="extract-content" Jan 26 08:54:46 crc kubenswrapper[4806]: I0126 08:54:46.897934 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee5585f-806d-4bbf-bbf7-b769754b9805" containerName="extract-content" Jan 26 08:54:46 crc kubenswrapper[4806]: E0126 08:54:46.897956 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee5585f-806d-4bbf-bbf7-b769754b9805" containerName="registry-server" Jan 26 08:54:46 crc kubenswrapper[4806]: I0126 08:54:46.897962 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee5585f-806d-4bbf-bbf7-b769754b9805" containerName="registry-server" Jan 26 08:54:46 crc kubenswrapper[4806]: E0126 08:54:46.898011 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee5585f-806d-4bbf-bbf7-b769754b9805" containerName="extract-utilities" Jan 26 08:54:46 crc kubenswrapper[4806]: I0126 08:54:46.898020 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee5585f-806d-4bbf-bbf7-b769754b9805" containerName="extract-utilities" Jan 26 08:54:46 crc kubenswrapper[4806]: I0126 08:54:46.898687 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee5585f-806d-4bbf-bbf7-b769754b9805" containerName="registry-server" Jan 26 08:54:46 crc kubenswrapper[4806]: I0126 08:54:46.903939 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:46 crc kubenswrapper[4806]: I0126 08:54:46.963415 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sshjs"] Jan 26 08:54:47 crc kubenswrapper[4806]: I0126 08:54:47.034831 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8d95c-93d0-4ace-8836-30306122082b-utilities\") pod \"certified-operators-sshjs\" (UID: \"cbd8d95c-93d0-4ace-8836-30306122082b\") " pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:47 crc kubenswrapper[4806]: I0126 08:54:47.035143 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c5wr\" (UniqueName: \"kubernetes.io/projected/cbd8d95c-93d0-4ace-8836-30306122082b-kube-api-access-7c5wr\") pod \"certified-operators-sshjs\" (UID: \"cbd8d95c-93d0-4ace-8836-30306122082b\") " pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:47 crc kubenswrapper[4806]: I0126 08:54:47.035283 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8d95c-93d0-4ace-8836-30306122082b-catalog-content\") pod \"certified-operators-sshjs\" (UID: \"cbd8d95c-93d0-4ace-8836-30306122082b\") " pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:47 crc kubenswrapper[4806]: I0126 08:54:47.137395 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c5wr\" (UniqueName: \"kubernetes.io/projected/cbd8d95c-93d0-4ace-8836-30306122082b-kube-api-access-7c5wr\") pod \"certified-operators-sshjs\" (UID: \"cbd8d95c-93d0-4ace-8836-30306122082b\") " pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:47 crc kubenswrapper[4806]: I0126 08:54:47.137500 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8d95c-93d0-4ace-8836-30306122082b-catalog-content\") pod \"certified-operators-sshjs\" (UID: \"cbd8d95c-93d0-4ace-8836-30306122082b\") " pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:47 crc kubenswrapper[4806]: I0126 08:54:47.137628 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8d95c-93d0-4ace-8836-30306122082b-utilities\") pod \"certified-operators-sshjs\" (UID: \"cbd8d95c-93d0-4ace-8836-30306122082b\") " pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:47 crc kubenswrapper[4806]: I0126 08:54:47.138242 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8d95c-93d0-4ace-8836-30306122082b-utilities\") pod \"certified-operators-sshjs\" (UID: \"cbd8d95c-93d0-4ace-8836-30306122082b\") " pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:47 crc kubenswrapper[4806]: I0126 08:54:47.138249 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8d95c-93d0-4ace-8836-30306122082b-catalog-content\") pod \"certified-operators-sshjs\" (UID: \"cbd8d95c-93d0-4ace-8836-30306122082b\") " pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:47 crc kubenswrapper[4806]: I0126 08:54:47.167463 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7c5wr\" (UniqueName: \"kubernetes.io/projected/cbd8d95c-93d0-4ace-8836-30306122082b-kube-api-access-7c5wr\") pod \"certified-operators-sshjs\" (UID: \"cbd8d95c-93d0-4ace-8836-30306122082b\") " pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:47 crc kubenswrapper[4806]: I0126 08:54:47.238116 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:47 crc kubenswrapper[4806]: I0126 08:54:47.849907 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sshjs"] Jan 26 08:54:47 crc kubenswrapper[4806]: I0126 08:54:47.988953 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sshjs" event={"ID":"cbd8d95c-93d0-4ace-8836-30306122082b","Type":"ContainerStarted","Data":"7a1f9381011de00f452e42d53b945b3aa449f5b543a23a0e388a321506848b66"} Jan 26 08:54:49 crc kubenswrapper[4806]: I0126 08:54:49.001093 4806 generic.go:334] "Generic (PLEG): container finished" podID="cbd8d95c-93d0-4ace-8836-30306122082b" containerID="2cfeada6bfd9667f90d74699e92256cf7645b718c1c70ed7dcb123e3f67918fc" exitCode=0 Jan 26 08:54:49 crc kubenswrapper[4806]: I0126 08:54:49.001147 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sshjs" event={"ID":"cbd8d95c-93d0-4ace-8836-30306122082b","Type":"ContainerDied","Data":"2cfeada6bfd9667f90d74699e92256cf7645b718c1c70ed7dcb123e3f67918fc"} Jan 26 08:54:49 crc kubenswrapper[4806]: I0126 08:54:49.003554 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 08:54:51 crc kubenswrapper[4806]: I0126 08:54:51.019101 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sshjs" event={"ID":"cbd8d95c-93d0-4ace-8836-30306122082b","Type":"ContainerStarted","Data":"3c0dfaaaf70578e6a86ba3514fb13eeea4943293c55ddaca778ceff26c8ac3c9"} Jan 26 08:54:52 crc kubenswrapper[4806]: I0126 08:54:52.030396 4806 generic.go:334] "Generic (PLEG): container finished" podID="cbd8d95c-93d0-4ace-8836-30306122082b" containerID="3c0dfaaaf70578e6a86ba3514fb13eeea4943293c55ddaca778ceff26c8ac3c9" exitCode=0 Jan 26 08:54:52 crc kubenswrapper[4806]: I0126 08:54:52.030458 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sshjs" event={"ID":"cbd8d95c-93d0-4ace-8836-30306122082b","Type":"ContainerDied","Data":"3c0dfaaaf70578e6a86ba3514fb13eeea4943293c55ddaca778ceff26c8ac3c9"} Jan 26 08:54:53 crc kubenswrapper[4806]: I0126 08:54:53.040780 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sshjs" event={"ID":"cbd8d95c-93d0-4ace-8836-30306122082b","Type":"ContainerStarted","Data":"95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204"} Jan 26 08:54:53 crc kubenswrapper[4806]: I0126 08:54:53.058660 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sshjs" podStartSLOduration=3.59250857 podStartE2EDuration="7.058626538s" podCreationTimestamp="2026-01-26 08:54:46 +0000 UTC" firstStartedPulling="2026-01-26 08:54:49.003271272 +0000 UTC m=+3668.267679338" lastFinishedPulling="2026-01-26 08:54:52.46938925 +0000 UTC m=+3671.733797306" observedRunningTime="2026-01-26 08:54:53.056413877 +0000 UTC m=+3672.320821933" watchObservedRunningTime="2026-01-26 
08:54:53.058626538 +0000 UTC m=+3672.323034594" Jan 26 08:54:57 crc kubenswrapper[4806]: I0126 08:54:57.239429 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:57 crc kubenswrapper[4806]: I0126 08:54:57.240271 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:57 crc kubenswrapper[4806]: I0126 08:54:57.284511 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:58 crc kubenswrapper[4806]: I0126 08:54:58.123214 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:54:58 crc kubenswrapper[4806]: I0126 08:54:58.168243 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sshjs"] Jan 26 08:55:00 crc kubenswrapper[4806]: I0126 08:55:00.091174 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sshjs" podUID="cbd8d95c-93d0-4ace-8836-30306122082b" containerName="registry-server" containerID="cri-o://95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204" gracePeriod=2 Jan 26 08:55:00 crc kubenswrapper[4806]: I0126 08:55:00.548150 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:55:00 crc kubenswrapper[4806]: I0126 08:55:00.736532 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c5wr\" (UniqueName: \"kubernetes.io/projected/cbd8d95c-93d0-4ace-8836-30306122082b-kube-api-access-7c5wr\") pod \"cbd8d95c-93d0-4ace-8836-30306122082b\" (UID: \"cbd8d95c-93d0-4ace-8836-30306122082b\") " Jan 26 08:55:00 crc kubenswrapper[4806]: I0126 08:55:00.736606 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8d95c-93d0-4ace-8836-30306122082b-catalog-content\") pod \"cbd8d95c-93d0-4ace-8836-30306122082b\" (UID: \"cbd8d95c-93d0-4ace-8836-30306122082b\") " Jan 26 08:55:00 crc kubenswrapper[4806]: I0126 08:55:00.736810 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8d95c-93d0-4ace-8836-30306122082b-utilities\") pod \"cbd8d95c-93d0-4ace-8836-30306122082b\" (UID: \"cbd8d95c-93d0-4ace-8836-30306122082b\") " Jan 26 08:55:00 crc kubenswrapper[4806]: I0126 08:55:00.737824 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd8d95c-93d0-4ace-8836-30306122082b-utilities" (OuterVolumeSpecName: "utilities") pod "cbd8d95c-93d0-4ace-8836-30306122082b" (UID: "cbd8d95c-93d0-4ace-8836-30306122082b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:55:00 crc kubenswrapper[4806]: I0126 08:55:00.746036 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd8d95c-93d0-4ace-8836-30306122082b-kube-api-access-7c5wr" (OuterVolumeSpecName: "kube-api-access-7c5wr") pod "cbd8d95c-93d0-4ace-8836-30306122082b" (UID: "cbd8d95c-93d0-4ace-8836-30306122082b"). InnerVolumeSpecName "kube-api-access-7c5wr". 
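The catalog pods' registry-server listens for gRPC on port 50051, and the earlier probe output ("timeout: failed to connect service \":50051\" within 1s") shows the startup probe simply failing to connect within its 1s budget until the registry finishes loading its content, after which the startup and readiness probes above flip to started and ready. A minimal connectivity check along those lines, using a plain TCP dial as a stand-in for the actual gRPC health probe:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // reachable reports whether something accepts a TCP connection on addr
    // within the given budget, roughly what the 1s startup probe is testing.
    func reachable(addr string, timeout time.Duration) error {
    	conn, err := net.DialTimeout("tcp", addr, timeout)
    	if err != nil {
    		return fmt.Errorf("failed to connect service %q within %s: %w", addr, timeout, err)
    	}
    	return conn.Close()
    }

    func main() {
    	fmt.Println(reachable("127.0.0.1:50051", time.Second))
    }
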
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:55:00 crc kubenswrapper[4806]: I0126 08:55:00.781744 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd8d95c-93d0-4ace-8836-30306122082b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbd8d95c-93d0-4ace-8836-30306122082b" (UID: "cbd8d95c-93d0-4ace-8836-30306122082b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:55:00 crc kubenswrapper[4806]: I0126 08:55:00.839009 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd8d95c-93d0-4ace-8836-30306122082b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 08:55:00 crc kubenswrapper[4806]: I0126 08:55:00.839047 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c5wr\" (UniqueName: \"kubernetes.io/projected/cbd8d95c-93d0-4ace-8836-30306122082b-kube-api-access-7c5wr\") on node \"crc\" DevicePath \"\"" Jan 26 08:55:00 crc kubenswrapper[4806]: I0126 08:55:00.839059 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd8d95c-93d0-4ace-8836-30306122082b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.100783 4806 generic.go:334] "Generic (PLEG): container finished" podID="cbd8d95c-93d0-4ace-8836-30306122082b" containerID="95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204" exitCode=0 Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.100835 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sshjs" event={"ID":"cbd8d95c-93d0-4ace-8836-30306122082b","Type":"ContainerDied","Data":"95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204"} Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.100875 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sshjs" event={"ID":"cbd8d95c-93d0-4ace-8836-30306122082b","Type":"ContainerDied","Data":"7a1f9381011de00f452e42d53b945b3aa449f5b543a23a0e388a321506848b66"} Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.100896 4806 scope.go:117] "RemoveContainer" containerID="95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204" Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.102138 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sshjs" Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.121930 4806 scope.go:117] "RemoveContainer" containerID="3c0dfaaaf70578e6a86ba3514fb13eeea4943293c55ddaca778ceff26c8ac3c9" Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.126381 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sshjs"] Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.136421 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sshjs"] Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.151882 4806 scope.go:117] "RemoveContainer" containerID="2cfeada6bfd9667f90d74699e92256cf7645b718c1c70ed7dcb123e3f67918fc" Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.187274 4806 scope.go:117] "RemoveContainer" containerID="95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204" Jan 26 08:55:01 crc kubenswrapper[4806]: E0126 08:55:01.187655 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204\": container with ID starting with 95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204 not found: ID does not exist" containerID="95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204" Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.187688 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204"} err="failed to get container status \"95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204\": rpc error: code = NotFound desc = could not find container \"95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204\": container with ID starting with 95eab4050395a861bcfa7ad7a2fb4613968e7b6dfe4ab0ef9002958a6f68c204 not found: ID does not exist" Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.187709 4806 scope.go:117] "RemoveContainer" containerID="3c0dfaaaf70578e6a86ba3514fb13eeea4943293c55ddaca778ceff26c8ac3c9" Jan 26 08:55:01 crc kubenswrapper[4806]: E0126 08:55:01.187882 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c0dfaaaf70578e6a86ba3514fb13eeea4943293c55ddaca778ceff26c8ac3c9\": container with ID starting with 3c0dfaaaf70578e6a86ba3514fb13eeea4943293c55ddaca778ceff26c8ac3c9 not found: ID does not exist" containerID="3c0dfaaaf70578e6a86ba3514fb13eeea4943293c55ddaca778ceff26c8ac3c9" Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.187901 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c0dfaaaf70578e6a86ba3514fb13eeea4943293c55ddaca778ceff26c8ac3c9"} err="failed to get container status \"3c0dfaaaf70578e6a86ba3514fb13eeea4943293c55ddaca778ceff26c8ac3c9\": rpc error: code = NotFound desc = could not find container \"3c0dfaaaf70578e6a86ba3514fb13eeea4943293c55ddaca778ceff26c8ac3c9\": container with ID starting with 3c0dfaaaf70578e6a86ba3514fb13eeea4943293c55ddaca778ceff26c8ac3c9 not found: ID does not exist" Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.187917 4806 scope.go:117] "RemoveContainer" containerID="2cfeada6bfd9667f90d74699e92256cf7645b718c1c70ed7dcb123e3f67918fc" Jan 26 08:55:01 crc kubenswrapper[4806]: E0126 08:55:01.188080 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2cfeada6bfd9667f90d74699e92256cf7645b718c1c70ed7dcb123e3f67918fc\": container with ID starting with 2cfeada6bfd9667f90d74699e92256cf7645b718c1c70ed7dcb123e3f67918fc not found: ID does not exist" containerID="2cfeada6bfd9667f90d74699e92256cf7645b718c1c70ed7dcb123e3f67918fc" Jan 26 08:55:01 crc kubenswrapper[4806]: I0126 08:55:01.188098 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfeada6bfd9667f90d74699e92256cf7645b718c1c70ed7dcb123e3f67918fc"} err="failed to get container status \"2cfeada6bfd9667f90d74699e92256cf7645b718c1c70ed7dcb123e3f67918fc\": rpc error: code = NotFound desc = could not find container \"2cfeada6bfd9667f90d74699e92256cf7645b718c1c70ed7dcb123e3f67918fc\": container with ID starting with 2cfeada6bfd9667f90d74699e92256cf7645b718c1c70ed7dcb123e3f67918fc not found: ID does not exist" Jan 26 08:55:03 crc kubenswrapper[4806]: I0126 08:55:03.054293 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd8d95c-93d0-4ace-8836-30306122082b" path="/var/lib/kubelet/pods/cbd8d95c-93d0-4ace-8836-30306122082b/volumes" Jan 26 08:55:45 crc kubenswrapper[4806]: I0126 08:55:45.479663 4806 generic.go:334] "Generic (PLEG): container finished" podID="d392e063-ed04-4768-b95c-cbd7d0e5afda" containerID="28ce943823052dfa7a387d045ee512ed129b9e6deb0bca1fe81de4164b5c4746" exitCode=1 Jan 26 08:55:45 crc kubenswrapper[4806]: I0126 08:55:45.480268 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"d392e063-ed04-4768-b95c-cbd7d0e5afda","Type":"ContainerDied","Data":"28ce943823052dfa7a387d045ee512ed129b9e6deb0bca1fe81de4164b5c4746"} Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.062112 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.157167 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 26 08:55:47 crc kubenswrapper[4806]: E0126 08:55:47.157744 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd8d95c-93d0-4ace-8836-30306122082b" containerName="registry-server" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.157757 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd8d95c-93d0-4ace-8836-30306122082b" containerName="registry-server" Jan 26 08:55:47 crc kubenswrapper[4806]: E0126 08:55:47.157767 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd8d95c-93d0-4ace-8836-30306122082b" containerName="extract-utilities" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.157774 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd8d95c-93d0-4ace-8836-30306122082b" containerName="extract-utilities" Jan 26 08:55:47 crc kubenswrapper[4806]: E0126 08:55:47.157792 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd8d95c-93d0-4ace-8836-30306122082b" containerName="extract-content" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.157798 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd8d95c-93d0-4ace-8836-30306122082b" containerName="extract-content" Jan 26 08:55:47 crc kubenswrapper[4806]: E0126 08:55:47.158546 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d392e063-ed04-4768-b95c-cbd7d0e5afda" containerName="tempest-tests-tempest-tests-runner" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.158560 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d392e063-ed04-4768-b95c-cbd7d0e5afda" containerName="tempest-tests-tempest-tests-runner" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.158743 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd8d95c-93d0-4ace-8836-30306122082b" containerName="registry-server" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.158762 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d392e063-ed04-4768-b95c-cbd7d0e5afda" containerName="tempest-tests-tempest-tests-runner" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.159356 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.163062 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.163194 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.167210 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d392e063-ed04-4768-b95c-cbd7d0e5afda-test-operator-ephemeral-temporary\") pod \"d392e063-ed04-4768-b95c-cbd7d0e5afda\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.167262 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d392e063-ed04-4768-b95c-cbd7d0e5afda-config-data\") pod \"d392e063-ed04-4768-b95c-cbd7d0e5afda\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.167288 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d392e063-ed04-4768-b95c-cbd7d0e5afda-openstack-config\") pod \"d392e063-ed04-4768-b95c-cbd7d0e5afda\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.167558 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-openstack-config-secret\") pod \"d392e063-ed04-4768-b95c-cbd7d0e5afda\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.167627 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-ssh-key\") pod \"d392e063-ed04-4768-b95c-cbd7d0e5afda\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.167689 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qggff\" (UniqueName: \"kubernetes.io/projected/d392e063-ed04-4768-b95c-cbd7d0e5afda-kube-api-access-qggff\") pod \"d392e063-ed04-4768-b95c-cbd7d0e5afda\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.167736 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"d392e063-ed04-4768-b95c-cbd7d0e5afda\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.167770 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-ca-certs\") pod \"d392e063-ed04-4768-b95c-cbd7d0e5afda\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.167812 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/d392e063-ed04-4768-b95c-cbd7d0e5afda-test-operator-ephemeral-workdir\") pod \"d392e063-ed04-4768-b95c-cbd7d0e5afda\" (UID: \"d392e063-ed04-4768-b95c-cbd7d0e5afda\") " Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.168243 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d392e063-ed04-4768-b95c-cbd7d0e5afda-config-data" (OuterVolumeSpecName: "config-data") pod "d392e063-ed04-4768-b95c-cbd7d0e5afda" (UID: "d392e063-ed04-4768-b95c-cbd7d0e5afda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.170316 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d392e063-ed04-4768-b95c-cbd7d0e5afda-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d392e063-ed04-4768-b95c-cbd7d0e5afda" (UID: "d392e063-ed04-4768-b95c-cbd7d0e5afda"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.175145 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.175760 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d392e063-ed04-4768-b95c-cbd7d0e5afda-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d392e063-ed04-4768-b95c-cbd7d0e5afda" (UID: "d392e063-ed04-4768-b95c-cbd7d0e5afda"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.197728 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d392e063-ed04-4768-b95c-cbd7d0e5afda-kube-api-access-qggff" (OuterVolumeSpecName: "kube-api-access-qggff") pod "d392e063-ed04-4768-b95c-cbd7d0e5afda" (UID: "d392e063-ed04-4768-b95c-cbd7d0e5afda"). InnerVolumeSpecName "kube-api-access-qggff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.198427 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d392e063-ed04-4768-b95c-cbd7d0e5afda" (UID: "d392e063-ed04-4768-b95c-cbd7d0e5afda"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.198872 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d392e063-ed04-4768-b95c-cbd7d0e5afda" (UID: "d392e063-ed04-4768-b95c-cbd7d0e5afda"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.204963 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d392e063-ed04-4768-b95c-cbd7d0e5afda" (UID: "d392e063-ed04-4768-b95c-cbd7d0e5afda"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.220772 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d392e063-ed04-4768-b95c-cbd7d0e5afda" (UID: "d392e063-ed04-4768-b95c-cbd7d0e5afda"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.241895 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d392e063-ed04-4768-b95c-cbd7d0e5afda-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d392e063-ed04-4768-b95c-cbd7d0e5afda" (UID: "d392e063-ed04-4768-b95c-cbd7d0e5afda"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.269312 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.269425 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.269463 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2f598ac-916e-43f9-9d50-09c4be97c717-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.269482 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7x86\" (UniqueName: \"kubernetes.io/projected/e2f598ac-916e-43f9-9d50-09c4be97c717-kube-api-access-c7x86\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.269591 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e2f598ac-916e-43f9-9d50-09c4be97c717-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.269726 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " 
pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.269843 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.270042 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e2f598ac-916e-43f9-9d50-09c4be97c717-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.270095 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2f598ac-916e-43f9-9d50-09c4be97c717-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.270312 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qggff\" (UniqueName: \"kubernetes.io/projected/d392e063-ed04-4768-b95c-cbd7d0e5afda-kube-api-access-qggff\") on node \"crc\" DevicePath \"\"" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.270439 4806 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.270658 4806 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d392e063-ed04-4768-b95c-cbd7d0e5afda-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.270762 4806 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d392e063-ed04-4768-b95c-cbd7d0e5afda-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.270858 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d392e063-ed04-4768-b95c-cbd7d0e5afda-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.270954 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d392e063-ed04-4768-b95c-cbd7d0e5afda-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.271047 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.271151 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/d392e063-ed04-4768-b95c-cbd7d0e5afda-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.344226 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.372890 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.373051 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2f598ac-916e-43f9-9d50-09c4be97c717-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.373095 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7x86\" (UniqueName: \"kubernetes.io/projected/e2f598ac-916e-43f9-9d50-09c4be97c717-kube-api-access-c7x86\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.373186 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e2f598ac-916e-43f9-9d50-09c4be97c717-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.373354 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.373476 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e2f598ac-916e-43f9-9d50-09c4be97c717-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.373534 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2f598ac-916e-43f9-9d50-09c4be97c717-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 
08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.373593 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.373800 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e2f598ac-916e-43f9-9d50-09c4be97c717-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.374400 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e2f598ac-916e-43f9-9d50-09c4be97c717-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.375032 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2f598ac-916e-43f9-9d50-09c4be97c717-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.375389 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2f598ac-916e-43f9-9d50-09c4be97c717-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.376538 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.377862 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.379944 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.806605 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"d392e063-ed04-4768-b95c-cbd7d0e5afda","Type":"ContainerDied","Data":"3267d0c955ac1e44cf22a3d3bf078d3d192439d417fa58dd44f449c69ca61f66"} Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.806656 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3267d0c955ac1e44cf22a3d3bf078d3d192439d417fa58dd44f449c69ca61f66" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.806726 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.822923 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7x86\" (UniqueName: \"kubernetes.io/projected/e2f598ac-916e-43f9-9d50-09c4be97c717-kube-api-access-c7x86\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:47 crc kubenswrapper[4806]: I0126 08:55:47.929701 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 08:55:48 crc kubenswrapper[4806]: I0126 08:55:48.501458 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 26 08:55:48 crc kubenswrapper[4806]: I0126 08:55:48.822111 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"e2f598ac-916e-43f9-9d50-09c4be97c717","Type":"ContainerStarted","Data":"9a774aa32768cae47e6c3e3d00a1e3e33e1c492130b9577838269d73afbae778"} Jan 26 08:55:50 crc kubenswrapper[4806]: I0126 08:55:50.842384 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"e2f598ac-916e-43f9-9d50-09c4be97c717","Type":"ContainerStarted","Data":"88fdc1d9344479c10ca824d5d551e93e6dbfba04a3baa898e553f266398c29e8"} Jan 26 08:55:50 crc kubenswrapper[4806]: I0126 08:55:50.861381 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=3.8613619569999997 podStartE2EDuration="3.861361957s" podCreationTimestamp="2026-01-26 08:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:55:50.857035656 +0000 UTC m=+3730.121443732" watchObservedRunningTime="2026-01-26 08:55:50.861361957 +0000 UTC m=+3730.125770013" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.269926 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7fd7f5fc77-snnst"] Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.284082 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7fd7f5fc77-snnst"] Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.284182 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.379440 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-ovndb-tls-certs\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.379496 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-public-tls-certs\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.379541 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmnq7\" (UniqueName: \"kubernetes.io/projected/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-kube-api-access-bmnq7\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.379677 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-config\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.379876 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-combined-ca-bundle\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.379946 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-httpd-config\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.380046 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-internal-tls-certs\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.480663 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-ovndb-tls-certs\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.481053 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-public-tls-certs\") pod \"neutron-7fd7f5fc77-snnst\" (UID: 
\"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.481085 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmnq7\" (UniqueName: \"kubernetes.io/projected/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-kube-api-access-bmnq7\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.481110 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-config\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.481160 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-combined-ca-bundle\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.481185 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-httpd-config\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.481210 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-internal-tls-certs\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.489196 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-combined-ca-bundle\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.490052 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-httpd-config\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.492214 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-ovndb-tls-certs\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.492961 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-config\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.495979 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-public-tls-certs\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.496734 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-internal-tls-certs\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.501724 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmnq7\" (UniqueName: \"kubernetes.io/projected/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-kube-api-access-bmnq7\") pod \"neutron-7fd7f5fc77-snnst\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:39 crc kubenswrapper[4806]: I0126 08:56:39.615968 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:40 crc kubenswrapper[4806]: I0126 08:56:40.247576 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7fd7f5fc77-snnst"] Jan 26 08:56:40 crc kubenswrapper[4806]: I0126 08:56:40.284337 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd7f5fc77-snnst" event={"ID":"1a42a8dd-6e74-4dba-a208-21461ce7ad8f","Type":"ContainerStarted","Data":"dcba9cf66a7fa75532dd8661dacb9cdc8ea890bf0435da063d1451b84efafdef"} Jan 26 08:56:41 crc kubenswrapper[4806]: I0126 08:56:41.295750 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd7f5fc77-snnst" event={"ID":"1a42a8dd-6e74-4dba-a208-21461ce7ad8f","Type":"ContainerStarted","Data":"917306e7bbeb9b54dc9a02bcbdfe6b4c60d8fe2df3c9a899fed4684d4d1eafa4"} Jan 26 08:56:41 crc kubenswrapper[4806]: I0126 08:56:41.296391 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:56:41 crc kubenswrapper[4806]: I0126 08:56:41.296402 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd7f5fc77-snnst" event={"ID":"1a42a8dd-6e74-4dba-a208-21461ce7ad8f","Type":"ContainerStarted","Data":"dbf92d3c9ccab2d639420553b365841303b3b37631362ea1814d3548b107050f"} Jan 26 08:56:41 crc kubenswrapper[4806]: I0126 08:56:41.336146 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7fd7f5fc77-snnst" podStartSLOduration=2.336125923 podStartE2EDuration="2.336125923s" podCreationTimestamp="2026-01-26 08:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 08:56:41.325540297 +0000 UTC m=+3780.589948353" watchObservedRunningTime="2026-01-26 08:56:41.336125923 +0000 UTC m=+3780.600533969" Jan 26 08:56:45 crc kubenswrapper[4806]: I0126 08:56:45.806167 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:56:45 crc kubenswrapper[4806]: I0126 08:56:45.807907 4806 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:57:09 crc kubenswrapper[4806]: I0126 08:57:09.630935 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 08:57:09 crc kubenswrapper[4806]: I0126 08:57:09.710955 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74d94c4d65-ms88t"] Jan 26 08:57:09 crc kubenswrapper[4806]: I0126 08:57:09.713202 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74d94c4d65-ms88t" podUID="1d9165bb-c377-4c19-9728-58a6ea046166" containerName="neutron-httpd" containerID="cri-o://3b40b8338b443d9b968b8d4234f7e57d534802db3a000fcee7657289cb8287e4" gracePeriod=30 Jan 26 08:57:09 crc kubenswrapper[4806]: I0126 08:57:09.713382 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74d94c4d65-ms88t" podUID="1d9165bb-c377-4c19-9728-58a6ea046166" containerName="neutron-api" containerID="cri-o://5f29e3670e7c17d3438882af45c0c1ecd4e0e50999b2425753851569b4fa4361" gracePeriod=30 Jan 26 08:57:10 crc kubenswrapper[4806]: I0126 08:57:10.525075 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74d94c4d65-ms88t" event={"ID":"1d9165bb-c377-4c19-9728-58a6ea046166","Type":"ContainerDied","Data":"3b40b8338b443d9b968b8d4234f7e57d534802db3a000fcee7657289cb8287e4"} Jan 26 08:57:10 crc kubenswrapper[4806]: I0126 08:57:10.525211 4806 generic.go:334] "Generic (PLEG): container finished" podID="1d9165bb-c377-4c19-9728-58a6ea046166" containerID="3b40b8338b443d9b968b8d4234f7e57d534802db3a000fcee7657289cb8287e4" exitCode=0 Jan 26 08:57:15 crc kubenswrapper[4806]: I0126 08:57:15.806399 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:57:15 crc kubenswrapper[4806]: I0126 08:57:15.807383 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:57:21 crc kubenswrapper[4806]: I0126 08:57:21.625231 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-74d94c4d65-ms88t" podUID="1d9165bb-c377-4c19-9728-58a6ea046166" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.171:9696/\": dial tcp 10.217.0.171:9696: connect: connection refused" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.674335 4806 generic.go:334] "Generic (PLEG): container finished" podID="1d9165bb-c377-4c19-9728-58a6ea046166" containerID="5f29e3670e7c17d3438882af45c0c1ecd4e0e50999b2425753851569b4fa4361" exitCode=0 Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.674471 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74d94c4d65-ms88t" 
event={"ID":"1d9165bb-c377-4c19-9728-58a6ea046166","Type":"ContainerDied","Data":"5f29e3670e7c17d3438882af45c0c1ecd4e0e50999b2425753851569b4fa4361"} Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.674956 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74d94c4d65-ms88t" event={"ID":"1d9165bb-c377-4c19-9728-58a6ea046166","Type":"ContainerDied","Data":"22610a1a13aa710c22702bd17d86583a07a6500c5e5b2d697b6dae3f9654602a"} Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.676986 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22610a1a13aa710c22702bd17d86583a07a6500c5e5b2d697b6dae3f9654602a" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.740234 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.883205 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-public-tls-certs\") pod \"1d9165bb-c377-4c19-9728-58a6ea046166\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.883271 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-config\") pod \"1d9165bb-c377-4c19-9728-58a6ea046166\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.883307 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnv49\" (UniqueName: \"kubernetes.io/projected/1d9165bb-c377-4c19-9728-58a6ea046166-kube-api-access-gnv49\") pod \"1d9165bb-c377-4c19-9728-58a6ea046166\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.883345 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-ovndb-tls-certs\") pod \"1d9165bb-c377-4c19-9728-58a6ea046166\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.883374 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-combined-ca-bundle\") pod \"1d9165bb-c377-4c19-9728-58a6ea046166\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.883403 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-httpd-config\") pod \"1d9165bb-c377-4c19-9728-58a6ea046166\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.883594 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-internal-tls-certs\") pod \"1d9165bb-c377-4c19-9728-58a6ea046166\" (UID: \"1d9165bb-c377-4c19-9728-58a6ea046166\") " Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.889762 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-httpd-config" 
(OuterVolumeSpecName: "httpd-config") pod "1d9165bb-c377-4c19-9728-58a6ea046166" (UID: "1d9165bb-c377-4c19-9728-58a6ea046166"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.890411 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d9165bb-c377-4c19-9728-58a6ea046166-kube-api-access-gnv49" (OuterVolumeSpecName: "kube-api-access-gnv49") pod "1d9165bb-c377-4c19-9728-58a6ea046166" (UID: "1d9165bb-c377-4c19-9728-58a6ea046166"). InnerVolumeSpecName "kube-api-access-gnv49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.947081 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1d9165bb-c377-4c19-9728-58a6ea046166" (UID: "1d9165bb-c377-4c19-9728-58a6ea046166"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.947634 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1d9165bb-c377-4c19-9728-58a6ea046166" (UID: "1d9165bb-c377-4c19-9728-58a6ea046166"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.973566 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d9165bb-c377-4c19-9728-58a6ea046166" (UID: "1d9165bb-c377-4c19-9728-58a6ea046166"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.973961 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1d9165bb-c377-4c19-9728-58a6ea046166" (UID: "1d9165bb-c377-4c19-9728-58a6ea046166"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.974826 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-config" (OuterVolumeSpecName: "config") pod "1d9165bb-c377-4c19-9728-58a6ea046166" (UID: "1d9165bb-c377-4c19-9728-58a6ea046166"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.986128 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.986291 4806 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.986362 4806 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.986439 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-config\") on node \"crc\" DevicePath \"\"" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.986700 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnv49\" (UniqueName: \"kubernetes.io/projected/1d9165bb-c377-4c19-9728-58a6ea046166-kube-api-access-gnv49\") on node \"crc\" DevicePath \"\"" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.986781 4806 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 08:57:29 crc kubenswrapper[4806]: I0126 08:57:29.986869 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d9165bb-c377-4c19-9728-58a6ea046166-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 08:57:30 crc kubenswrapper[4806]: I0126 08:57:30.699043 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74d94c4d65-ms88t" Jan 26 08:57:30 crc kubenswrapper[4806]: I0126 08:57:30.737492 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74d94c4d65-ms88t"] Jan 26 08:57:30 crc kubenswrapper[4806]: I0126 08:57:30.745875 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74d94c4d65-ms88t"] Jan 26 08:57:31 crc kubenswrapper[4806]: I0126 08:57:31.052717 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d9165bb-c377-4c19-9728-58a6ea046166" path="/var/lib/kubelet/pods/1d9165bb-c377-4c19-9728-58a6ea046166/volumes" Jan 26 08:57:38 crc kubenswrapper[4806]: I0126 08:57:38.113457 4806 scope.go:117] "RemoveContainer" containerID="3b40b8338b443d9b968b8d4234f7e57d534802db3a000fcee7657289cb8287e4" Jan 26 08:57:38 crc kubenswrapper[4806]: I0126 08:57:38.146831 4806 scope.go:117] "RemoveContainer" containerID="5f29e3670e7c17d3438882af45c0c1ecd4e0e50999b2425753851569b4fa4361" Jan 26 08:57:45 crc kubenswrapper[4806]: I0126 08:57:45.806874 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 08:57:45 crc kubenswrapper[4806]: I0126 08:57:45.807488 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 08:57:45 crc kubenswrapper[4806]: I0126 08:57:45.807559 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 08:57:45 crc kubenswrapper[4806]: I0126 08:57:45.808245 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 08:57:45 crc kubenswrapper[4806]: I0126 08:57:45.808295 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" gracePeriod=600 Jan 26 08:57:45 crc kubenswrapper[4806]: E0126 08:57:45.939877 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:57:46 crc kubenswrapper[4806]: I0126 08:57:46.854831 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" exitCode=0 Jan 26 08:57:46 crc 
kubenswrapper[4806]: I0126 08:57:46.854960 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437"} Jan 26 08:57:46 crc kubenswrapper[4806]: I0126 08:57:46.855454 4806 scope.go:117] "RemoveContainer" containerID="43ff16cf69e749e0df6a48aeca81c99531814b9095b171bb9f480a5506e94d53" Jan 26 08:57:46 crc kubenswrapper[4806]: I0126 08:57:46.856374 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 08:57:46 crc kubenswrapper[4806]: E0126 08:57:46.856838 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:57:50 crc kubenswrapper[4806]: I0126 08:57:50.435705 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-rqfsl" podUID="b94c8b95-1f08-4f96-a9c0-47aef79a823b" containerName="registry-server" probeResult="failure" output=< Jan 26 08:57:50 crc kubenswrapper[4806]: timeout: health rpc did not complete within 1s Jan 26 08:57:50 crc kubenswrapper[4806]: > Jan 26 08:58:01 crc kubenswrapper[4806]: I0126 08:58:01.052532 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 08:58:01 crc kubenswrapper[4806]: E0126 08:58:01.053662 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:58:12 crc kubenswrapper[4806]: I0126 08:58:12.042631 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 08:58:12 crc kubenswrapper[4806]: E0126 08:58:12.043356 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:58:26 crc kubenswrapper[4806]: I0126 08:58:26.042321 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 08:58:26 crc kubenswrapper[4806]: E0126 08:58:26.043230 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" 
podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:58:38 crc kubenswrapper[4806]: I0126 08:58:38.043081 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 08:58:38 crc kubenswrapper[4806]: E0126 08:58:38.043955 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:58:50 crc kubenswrapper[4806]: I0126 08:58:50.041945 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 08:58:50 crc kubenswrapper[4806]: E0126 08:58:50.042783 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:59:01 crc kubenswrapper[4806]: I0126 08:59:01.047840 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 08:59:01 crc kubenswrapper[4806]: E0126 08:59:01.048683 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:59:13 crc kubenswrapper[4806]: I0126 08:59:13.041589 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 08:59:13 crc kubenswrapper[4806]: E0126 08:59:13.042383 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:59:28 crc kubenswrapper[4806]: I0126 08:59:28.041443 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 08:59:28 crc kubenswrapper[4806]: E0126 08:59:28.042353 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:59:42 crc kubenswrapper[4806]: I0126 08:59:42.041988 4806 scope.go:117] "RemoveContainer" 
containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 08:59:42 crc kubenswrapper[4806]: E0126 08:59:42.042783 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 08:59:57 crc kubenswrapper[4806]: I0126 08:59:57.041972 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 08:59:57 crc kubenswrapper[4806]: E0126 08:59:57.042722 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.186219 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66"] Jan 26 09:00:00 crc kubenswrapper[4806]: E0126 09:00:00.186982 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9165bb-c377-4c19-9728-58a6ea046166" containerName="neutron-httpd" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.187000 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9165bb-c377-4c19-9728-58a6ea046166" containerName="neutron-httpd" Jan 26 09:00:00 crc kubenswrapper[4806]: E0126 09:00:00.187050 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d9165bb-c377-4c19-9728-58a6ea046166" containerName="neutron-api" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.187056 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d9165bb-c377-4c19-9728-58a6ea046166" containerName="neutron-api" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.187243 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d9165bb-c377-4c19-9728-58a6ea046166" containerName="neutron-api" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.187269 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d9165bb-c377-4c19-9728-58a6ea046166" containerName="neutron-httpd" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.188817 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.191704 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.200611 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.208240 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66"] Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.229547 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnc4j\" (UniqueName: \"kubernetes.io/projected/0e727139-3126-460d-9f74-0dc96b6fcf53-kube-api-access-bnc4j\") pod \"collect-profiles-29490300-wpj66\" (UID: \"0e727139-3126-460d-9f74-0dc96b6fcf53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.229841 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e727139-3126-460d-9f74-0dc96b6fcf53-config-volume\") pod \"collect-profiles-29490300-wpj66\" (UID: \"0e727139-3126-460d-9f74-0dc96b6fcf53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.229960 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e727139-3126-460d-9f74-0dc96b6fcf53-secret-volume\") pod \"collect-profiles-29490300-wpj66\" (UID: \"0e727139-3126-460d-9f74-0dc96b6fcf53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.332125 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e727139-3126-460d-9f74-0dc96b6fcf53-config-volume\") pod \"collect-profiles-29490300-wpj66\" (UID: \"0e727139-3126-460d-9f74-0dc96b6fcf53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.332444 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e727139-3126-460d-9f74-0dc96b6fcf53-secret-volume\") pod \"collect-profiles-29490300-wpj66\" (UID: \"0e727139-3126-460d-9f74-0dc96b6fcf53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.332628 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnc4j\" (UniqueName: \"kubernetes.io/projected/0e727139-3126-460d-9f74-0dc96b6fcf53-kube-api-access-bnc4j\") pod \"collect-profiles-29490300-wpj66\" (UID: \"0e727139-3126-460d-9f74-0dc96b6fcf53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.333260 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e727139-3126-460d-9f74-0dc96b6fcf53-config-volume\") pod 
\"collect-profiles-29490300-wpj66\" (UID: \"0e727139-3126-460d-9f74-0dc96b6fcf53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.340757 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e727139-3126-460d-9f74-0dc96b6fcf53-secret-volume\") pod \"collect-profiles-29490300-wpj66\" (UID: \"0e727139-3126-460d-9f74-0dc96b6fcf53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.348386 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnc4j\" (UniqueName: \"kubernetes.io/projected/0e727139-3126-460d-9f74-0dc96b6fcf53-kube-api-access-bnc4j\") pod \"collect-profiles-29490300-wpj66\" (UID: \"0e727139-3126-460d-9f74-0dc96b6fcf53\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:00 crc kubenswrapper[4806]: I0126 09:00:00.513250 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:01 crc kubenswrapper[4806]: I0126 09:00:01.040137 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66"] Jan 26 09:00:01 crc kubenswrapper[4806]: I0126 09:00:01.596342 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" event={"ID":"0e727139-3126-460d-9f74-0dc96b6fcf53","Type":"ContainerStarted","Data":"57aa4ad8d59f5cb77cb4a7c6b622e582931a0168af12f2d75b3912786fa4fd79"} Jan 26 09:00:01 crc kubenswrapper[4806]: I0126 09:00:01.596675 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" event={"ID":"0e727139-3126-460d-9f74-0dc96b6fcf53","Type":"ContainerStarted","Data":"5b53bc60ed00df1a6554c667daa80af2694ae6fca745d2cea6b18ab55d11dec9"} Jan 26 09:00:01 crc kubenswrapper[4806]: I0126 09:00:01.614290 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" podStartSLOduration=1.614272435 podStartE2EDuration="1.614272435s" podCreationTimestamp="2026-01-26 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:00:01.611685033 +0000 UTC m=+3980.876093089" watchObservedRunningTime="2026-01-26 09:00:01.614272435 +0000 UTC m=+3980.878680491" Jan 26 09:00:02 crc kubenswrapper[4806]: I0126 09:00:02.619076 4806 generic.go:334] "Generic (PLEG): container finished" podID="0e727139-3126-460d-9f74-0dc96b6fcf53" containerID="57aa4ad8d59f5cb77cb4a7c6b622e582931a0168af12f2d75b3912786fa4fd79" exitCode=0 Jan 26 09:00:02 crc kubenswrapper[4806]: I0126 09:00:02.619432 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" event={"ID":"0e727139-3126-460d-9f74-0dc96b6fcf53","Type":"ContainerDied","Data":"57aa4ad8d59f5cb77cb4a7c6b622e582931a0168af12f2d75b3912786fa4fd79"} Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.040395 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.106412 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnc4j\" (UniqueName: \"kubernetes.io/projected/0e727139-3126-460d-9f74-0dc96b6fcf53-kube-api-access-bnc4j\") pod \"0e727139-3126-460d-9f74-0dc96b6fcf53\" (UID: \"0e727139-3126-460d-9f74-0dc96b6fcf53\") " Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.107147 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e727139-3126-460d-9f74-0dc96b6fcf53-secret-volume\") pod \"0e727139-3126-460d-9f74-0dc96b6fcf53\" (UID: \"0e727139-3126-460d-9f74-0dc96b6fcf53\") " Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.107336 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e727139-3126-460d-9f74-0dc96b6fcf53-config-volume\") pod \"0e727139-3126-460d-9f74-0dc96b6fcf53\" (UID: \"0e727139-3126-460d-9f74-0dc96b6fcf53\") " Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.109438 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e727139-3126-460d-9f74-0dc96b6fcf53-config-volume" (OuterVolumeSpecName: "config-volume") pod "0e727139-3126-460d-9f74-0dc96b6fcf53" (UID: "0e727139-3126-460d-9f74-0dc96b6fcf53"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.114486 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e727139-3126-460d-9f74-0dc96b6fcf53-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0e727139-3126-460d-9f74-0dc96b6fcf53" (UID: "0e727139-3126-460d-9f74-0dc96b6fcf53"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.127841 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e727139-3126-460d-9f74-0dc96b6fcf53-kube-api-access-bnc4j" (OuterVolumeSpecName: "kube-api-access-bnc4j") pod "0e727139-3126-460d-9f74-0dc96b6fcf53" (UID: "0e727139-3126-460d-9f74-0dc96b6fcf53"). InnerVolumeSpecName "kube-api-access-bnc4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.212781 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e727139-3126-460d-9f74-0dc96b6fcf53-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.212816 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnc4j\" (UniqueName: \"kubernetes.io/projected/0e727139-3126-460d-9f74-0dc96b6fcf53-kube-api-access-bnc4j\") on node \"crc\" DevicePath \"\"" Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.212826 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e727139-3126-460d-9f74-0dc96b6fcf53-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.635363 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" event={"ID":"0e727139-3126-460d-9f74-0dc96b6fcf53","Type":"ContainerDied","Data":"5b53bc60ed00df1a6554c667daa80af2694ae6fca745d2cea6b18ab55d11dec9"} Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.635414 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b53bc60ed00df1a6554c667daa80af2694ae6fca745d2cea6b18ab55d11dec9" Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.635425 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66" Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.699945 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc"] Jan 26 09:00:04 crc kubenswrapper[4806]: I0126 09:00:04.713145 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490255-wznsc"] Jan 26 09:00:05 crc kubenswrapper[4806]: I0126 09:00:05.053493 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d813a14-773d-4ceb-858f-8978f96fe6de" path="/var/lib/kubelet/pods/2d813a14-773d-4ceb-858f-8978f96fe6de/volumes" Jan 26 09:00:08 crc kubenswrapper[4806]: I0126 09:00:08.042220 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:00:08 crc kubenswrapper[4806]: E0126 09:00:08.042780 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:00:23 crc kubenswrapper[4806]: I0126 09:00:23.042331 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:00:23 crc kubenswrapper[4806]: E0126 09:00:23.043463 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.236363 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ldpk9"] Jan 26 09:00:30 crc kubenswrapper[4806]: E0126 09:00:30.237426 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e727139-3126-460d-9f74-0dc96b6fcf53" containerName="collect-profiles" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.237443 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e727139-3126-460d-9f74-0dc96b6fcf53" containerName="collect-profiles" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.237717 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e727139-3126-460d-9f74-0dc96b6fcf53" containerName="collect-profiles" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.239333 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.251074 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-utilities\") pod \"community-operators-ldpk9\" (UID: \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\") " pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.251506 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdx8z\" (UniqueName: \"kubernetes.io/projected/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-kube-api-access-hdx8z\") pod \"community-operators-ldpk9\" (UID: \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\") " pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.251969 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-catalog-content\") pod \"community-operators-ldpk9\" (UID: \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\") " pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.270131 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ldpk9"] Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.353063 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-catalog-content\") pod \"community-operators-ldpk9\" (UID: \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\") " pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.353149 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-utilities\") pod \"community-operators-ldpk9\" (UID: \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\") " pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.353210 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdx8z\" (UniqueName: \"kubernetes.io/projected/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-kube-api-access-hdx8z\") 
pod \"community-operators-ldpk9\" (UID: \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\") " pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.353894 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-catalog-content\") pod \"community-operators-ldpk9\" (UID: \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\") " pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.353930 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-utilities\") pod \"community-operators-ldpk9\" (UID: \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\") " pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.384549 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdx8z\" (UniqueName: \"kubernetes.io/projected/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-kube-api-access-hdx8z\") pod \"community-operators-ldpk9\" (UID: \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\") " pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:30 crc kubenswrapper[4806]: I0126 09:00:30.558656 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:31 crc kubenswrapper[4806]: I0126 09:00:31.037215 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ldpk9"] Jan 26 09:00:31 crc kubenswrapper[4806]: I0126 09:00:31.878980 4806 generic.go:334] "Generic (PLEG): container finished" podID="b72372ab-fbb5-4d7a-ba48-246d52e90d5c" containerID="d5652b31bdedf1bcd90986aae1a4dc83cc6d1dcff290053294b2332d5fe3798f" exitCode=0 Jan 26 09:00:31 crc kubenswrapper[4806]: I0126 09:00:31.879163 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldpk9" event={"ID":"b72372ab-fbb5-4d7a-ba48-246d52e90d5c","Type":"ContainerDied","Data":"d5652b31bdedf1bcd90986aae1a4dc83cc6d1dcff290053294b2332d5fe3798f"} Jan 26 09:00:31 crc kubenswrapper[4806]: I0126 09:00:31.879315 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldpk9" event={"ID":"b72372ab-fbb5-4d7a-ba48-246d52e90d5c","Type":"ContainerStarted","Data":"407d33da52e20f54cfe25e5779d7675c12764ab33547ea7a2f201213934a2984"} Jan 26 09:00:31 crc kubenswrapper[4806]: I0126 09:00:31.881674 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 09:00:32 crc kubenswrapper[4806]: I0126 09:00:32.888881 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldpk9" event={"ID":"b72372ab-fbb5-4d7a-ba48-246d52e90d5c","Type":"ContainerStarted","Data":"4d05cd7aa643b71fac342597d8f430513ad985f6e22c37f5411a1f6e3c261459"} Jan 26 09:00:33 crc kubenswrapper[4806]: I0126 09:00:33.898793 4806 generic.go:334] "Generic (PLEG): container finished" podID="b72372ab-fbb5-4d7a-ba48-246d52e90d5c" containerID="4d05cd7aa643b71fac342597d8f430513ad985f6e22c37f5411a1f6e3c261459" exitCode=0 Jan 26 09:00:33 crc kubenswrapper[4806]: I0126 09:00:33.898899 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldpk9" 
event={"ID":"b72372ab-fbb5-4d7a-ba48-246d52e90d5c","Type":"ContainerDied","Data":"4d05cd7aa643b71fac342597d8f430513ad985f6e22c37f5411a1f6e3c261459"} Jan 26 09:00:34 crc kubenswrapper[4806]: I0126 09:00:34.915059 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldpk9" event={"ID":"b72372ab-fbb5-4d7a-ba48-246d52e90d5c","Type":"ContainerStarted","Data":"eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d"} Jan 26 09:00:34 crc kubenswrapper[4806]: I0126 09:00:34.950738 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ldpk9" podStartSLOduration=2.523562526 podStartE2EDuration="4.950718145s" podCreationTimestamp="2026-01-26 09:00:30 +0000 UTC" firstStartedPulling="2026-01-26 09:00:31.881143438 +0000 UTC m=+4011.145551494" lastFinishedPulling="2026-01-26 09:00:34.308299057 +0000 UTC m=+4013.572707113" observedRunningTime="2026-01-26 09:00:34.946182579 +0000 UTC m=+4014.210590645" watchObservedRunningTime="2026-01-26 09:00:34.950718145 +0000 UTC m=+4014.215126201" Jan 26 09:00:35 crc kubenswrapper[4806]: I0126 09:00:35.042964 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:00:35 crc kubenswrapper[4806]: E0126 09:00:35.043841 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:00:38 crc kubenswrapper[4806]: I0126 09:00:38.239653 4806 scope.go:117] "RemoveContainer" containerID="954a212c4d7ea0d7669d70baa7fd88d9093d7b86e201cd24f8678ceff7b15c54" Jan 26 09:00:40 crc kubenswrapper[4806]: I0126 09:00:40.559289 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:40 crc kubenswrapper[4806]: I0126 09:00:40.559677 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:40 crc kubenswrapper[4806]: I0126 09:00:40.604371 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:41 crc kubenswrapper[4806]: I0126 09:00:41.054419 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:41 crc kubenswrapper[4806]: I0126 09:00:41.100827 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ldpk9"] Jan 26 09:00:42 crc kubenswrapper[4806]: I0126 09:00:42.988906 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ldpk9" podUID="b72372ab-fbb5-4d7a-ba48-246d52e90d5c" containerName="registry-server" containerID="cri-o://eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d" gracePeriod=2 Jan 26 09:00:43 crc kubenswrapper[4806]: I0126 09:00:43.561376 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:43 crc kubenswrapper[4806]: I0126 09:00:43.762658 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-catalog-content\") pod \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\" (UID: \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\") " Jan 26 09:00:43 crc kubenswrapper[4806]: I0126 09:00:43.762834 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdx8z\" (UniqueName: \"kubernetes.io/projected/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-kube-api-access-hdx8z\") pod \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\" (UID: \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\") " Jan 26 09:00:43 crc kubenswrapper[4806]: I0126 09:00:43.762927 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-utilities\") pod \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\" (UID: \"b72372ab-fbb5-4d7a-ba48-246d52e90d5c\") " Jan 26 09:00:43 crc kubenswrapper[4806]: I0126 09:00:43.763808 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-utilities" (OuterVolumeSpecName: "utilities") pod "b72372ab-fbb5-4d7a-ba48-246d52e90d5c" (UID: "b72372ab-fbb5-4d7a-ba48-246d52e90d5c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:00:43 crc kubenswrapper[4806]: I0126 09:00:43.772028 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-kube-api-access-hdx8z" (OuterVolumeSpecName: "kube-api-access-hdx8z") pod "b72372ab-fbb5-4d7a-ba48-246d52e90d5c" (UID: "b72372ab-fbb5-4d7a-ba48-246d52e90d5c"). InnerVolumeSpecName "kube-api-access-hdx8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:00:43 crc kubenswrapper[4806]: I0126 09:00:43.823821 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b72372ab-fbb5-4d7a-ba48-246d52e90d5c" (UID: "b72372ab-fbb5-4d7a-ba48-246d52e90d5c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:00:43 crc kubenswrapper[4806]: I0126 09:00:43.865066 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdx8z\" (UniqueName: \"kubernetes.io/projected/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-kube-api-access-hdx8z\") on node \"crc\" DevicePath \"\"" Jan 26 09:00:43 crc kubenswrapper[4806]: I0126 09:00:43.865098 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:00:43 crc kubenswrapper[4806]: I0126 09:00:43.865109 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b72372ab-fbb5-4d7a-ba48-246d52e90d5c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.010956 4806 generic.go:334] "Generic (PLEG): container finished" podID="b72372ab-fbb5-4d7a-ba48-246d52e90d5c" containerID="eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d" exitCode=0 Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.011020 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldpk9" event={"ID":"b72372ab-fbb5-4d7a-ba48-246d52e90d5c","Type":"ContainerDied","Data":"eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d"} Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.011055 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ldpk9" event={"ID":"b72372ab-fbb5-4d7a-ba48-246d52e90d5c","Type":"ContainerDied","Data":"407d33da52e20f54cfe25e5779d7675c12764ab33547ea7a2f201213934a2984"} Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.011090 4806 scope.go:117] "RemoveContainer" containerID="eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.011298 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ldpk9" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.045774 4806 scope.go:117] "RemoveContainer" containerID="4d05cd7aa643b71fac342597d8f430513ad985f6e22c37f5411a1f6e3c261459" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.059705 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ldpk9"] Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.067095 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ldpk9"] Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.087268 4806 scope.go:117] "RemoveContainer" containerID="d5652b31bdedf1bcd90986aae1a4dc83cc6d1dcff290053294b2332d5fe3798f" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.119329 4806 scope.go:117] "RemoveContainer" containerID="eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d" Jan 26 09:00:44 crc kubenswrapper[4806]: E0126 09:00:44.120719 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d\": container with ID starting with eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d not found: ID does not exist" containerID="eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.121221 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d"} err="failed to get container status \"eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d\": rpc error: code = NotFound desc = could not find container \"eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d\": container with ID starting with eaef2360f93b93e451520f7897cd015d710d608c661967562977ab260582333d not found: ID does not exist" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.121248 4806 scope.go:117] "RemoveContainer" containerID="4d05cd7aa643b71fac342597d8f430513ad985f6e22c37f5411a1f6e3c261459" Jan 26 09:00:44 crc kubenswrapper[4806]: E0126 09:00:44.121596 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d05cd7aa643b71fac342597d8f430513ad985f6e22c37f5411a1f6e3c261459\": container with ID starting with 4d05cd7aa643b71fac342597d8f430513ad985f6e22c37f5411a1f6e3c261459 not found: ID does not exist" containerID="4d05cd7aa643b71fac342597d8f430513ad985f6e22c37f5411a1f6e3c261459" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.121673 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d05cd7aa643b71fac342597d8f430513ad985f6e22c37f5411a1f6e3c261459"} err="failed to get container status \"4d05cd7aa643b71fac342597d8f430513ad985f6e22c37f5411a1f6e3c261459\": rpc error: code = NotFound desc = could not find container \"4d05cd7aa643b71fac342597d8f430513ad985f6e22c37f5411a1f6e3c261459\": container with ID starting with 4d05cd7aa643b71fac342597d8f430513ad985f6e22c37f5411a1f6e3c261459 not found: ID does not exist" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.121736 4806 scope.go:117] "RemoveContainer" containerID="d5652b31bdedf1bcd90986aae1a4dc83cc6d1dcff290053294b2332d5fe3798f" Jan 26 09:00:44 crc kubenswrapper[4806]: E0126 09:00:44.122233 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d5652b31bdedf1bcd90986aae1a4dc83cc6d1dcff290053294b2332d5fe3798f\": container with ID starting with d5652b31bdedf1bcd90986aae1a4dc83cc6d1dcff290053294b2332d5fe3798f not found: ID does not exist" containerID="d5652b31bdedf1bcd90986aae1a4dc83cc6d1dcff290053294b2332d5fe3798f" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.122286 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5652b31bdedf1bcd90986aae1a4dc83cc6d1dcff290053294b2332d5fe3798f"} err="failed to get container status \"d5652b31bdedf1bcd90986aae1a4dc83cc6d1dcff290053294b2332d5fe3798f\": rpc error: code = NotFound desc = could not find container \"d5652b31bdedf1bcd90986aae1a4dc83cc6d1dcff290053294b2332d5fe3798f\": container with ID starting with d5652b31bdedf1bcd90986aae1a4dc83cc6d1dcff290053294b2332d5fe3798f not found: ID does not exist" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.244257 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k7lhg"] Jan 26 09:00:44 crc kubenswrapper[4806]: E0126 09:00:44.244654 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72372ab-fbb5-4d7a-ba48-246d52e90d5c" containerName="extract-utilities" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.244671 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72372ab-fbb5-4d7a-ba48-246d52e90d5c" containerName="extract-utilities" Jan 26 09:00:44 crc kubenswrapper[4806]: E0126 09:00:44.244683 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72372ab-fbb5-4d7a-ba48-246d52e90d5c" containerName="registry-server" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.244690 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72372ab-fbb5-4d7a-ba48-246d52e90d5c" containerName="registry-server" Jan 26 09:00:44 crc kubenswrapper[4806]: E0126 09:00:44.244720 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b72372ab-fbb5-4d7a-ba48-246d52e90d5c" containerName="extract-content" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.244726 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b72372ab-fbb5-4d7a-ba48-246d52e90d5c" containerName="extract-content" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.244929 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b72372ab-fbb5-4d7a-ba48-246d52e90d5c" containerName="registry-server" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.248771 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.261464 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7lhg"] Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.376805 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-catalog-content\") pod \"redhat-marketplace-k7lhg\" (UID: \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\") " pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.376866 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8szqh\" (UniqueName: \"kubernetes.io/projected/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-kube-api-access-8szqh\") pod \"redhat-marketplace-k7lhg\" (UID: \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\") " pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.377003 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-utilities\") pod \"redhat-marketplace-k7lhg\" (UID: \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\") " pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.479199 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-catalog-content\") pod \"redhat-marketplace-k7lhg\" (UID: \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\") " pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.479468 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8szqh\" (UniqueName: \"kubernetes.io/projected/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-kube-api-access-8szqh\") pod \"redhat-marketplace-k7lhg\" (UID: \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\") " pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.479570 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-utilities\") pod \"redhat-marketplace-k7lhg\" (UID: \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\") " pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.479775 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-catalog-content\") pod \"redhat-marketplace-k7lhg\" (UID: \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\") " pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.480012 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-utilities\") pod \"redhat-marketplace-k7lhg\" (UID: \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\") " pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.496772 4806 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8szqh\" (UniqueName: \"kubernetes.io/projected/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-kube-api-access-8szqh\") pod \"redhat-marketplace-k7lhg\" (UID: \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\") " pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:44 crc kubenswrapper[4806]: I0126 09:00:44.567734 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:45 crc kubenswrapper[4806]: I0126 09:00:45.052888 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b72372ab-fbb5-4d7a-ba48-246d52e90d5c" path="/var/lib/kubelet/pods/b72372ab-fbb5-4d7a-ba48-246d52e90d5c/volumes" Jan 26 09:00:45 crc kubenswrapper[4806]: I0126 09:00:45.053975 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7lhg"] Jan 26 09:00:46 crc kubenswrapper[4806]: I0126 09:00:46.029249 4806 generic.go:334] "Generic (PLEG): container finished" podID="56e5cbed-1b68-4a58-b8e0-9ed300ed6825" containerID="f4380de4144eb322f8c0bfbf74deb29c07e6bb19e60e5afe22d6c93e111242c0" exitCode=0 Jan 26 09:00:46 crc kubenswrapper[4806]: I0126 09:00:46.029300 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7lhg" event={"ID":"56e5cbed-1b68-4a58-b8e0-9ed300ed6825","Type":"ContainerDied","Data":"f4380de4144eb322f8c0bfbf74deb29c07e6bb19e60e5afe22d6c93e111242c0"} Jan 26 09:00:46 crc kubenswrapper[4806]: I0126 09:00:46.029653 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7lhg" event={"ID":"56e5cbed-1b68-4a58-b8e0-9ed300ed6825","Type":"ContainerStarted","Data":"fa6b6ab0447b7123556ec6db96ae0b01e31a2aefb5ec5610b873fe89aad94f6a"} Jan 26 09:00:48 crc kubenswrapper[4806]: I0126 09:00:48.049041 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7lhg" event={"ID":"56e5cbed-1b68-4a58-b8e0-9ed300ed6825","Type":"ContainerStarted","Data":"e2a39daca47f22fcaafe49cc28a243fd11f00651bfaec98ec8cc95fa6e901392"} Jan 26 09:00:49 crc kubenswrapper[4806]: I0126 09:00:49.042106 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:00:49 crc kubenswrapper[4806]: E0126 09:00:49.042577 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:00:49 crc kubenswrapper[4806]: I0126 09:00:49.074918 4806 generic.go:334] "Generic (PLEG): container finished" podID="56e5cbed-1b68-4a58-b8e0-9ed300ed6825" containerID="e2a39daca47f22fcaafe49cc28a243fd11f00651bfaec98ec8cc95fa6e901392" exitCode=0 Jan 26 09:00:49 crc kubenswrapper[4806]: I0126 09:00:49.074968 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7lhg" event={"ID":"56e5cbed-1b68-4a58-b8e0-9ed300ed6825","Type":"ContainerDied","Data":"e2a39daca47f22fcaafe49cc28a243fd11f00651bfaec98ec8cc95fa6e901392"} Jan 26 09:00:50 crc kubenswrapper[4806]: I0126 09:00:50.088149 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7lhg" 
event={"ID":"56e5cbed-1b68-4a58-b8e0-9ed300ed6825","Type":"ContainerStarted","Data":"a6b5355758639963f07c3196d8722af04d255a551d910f01064c3655fbd4b44b"} Jan 26 09:00:50 crc kubenswrapper[4806]: I0126 09:00:50.116777 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k7lhg" podStartSLOduration=2.646219268 podStartE2EDuration="6.116759116s" podCreationTimestamp="2026-01-26 09:00:44 +0000 UTC" firstStartedPulling="2026-01-26 09:00:46.031399789 +0000 UTC m=+4025.295807845" lastFinishedPulling="2026-01-26 09:00:49.501939637 +0000 UTC m=+4028.766347693" observedRunningTime="2026-01-26 09:00:50.110258175 +0000 UTC m=+4029.374666241" watchObservedRunningTime="2026-01-26 09:00:50.116759116 +0000 UTC m=+4029.381167172" Jan 26 09:00:54 crc kubenswrapper[4806]: I0126 09:00:54.568605 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:54 crc kubenswrapper[4806]: I0126 09:00:54.569251 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:54 crc kubenswrapper[4806]: I0126 09:00:54.618036 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:55 crc kubenswrapper[4806]: I0126 09:00:55.171604 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:55 crc kubenswrapper[4806]: I0126 09:00:55.224236 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7lhg"] Jan 26 09:00:57 crc kubenswrapper[4806]: I0126 09:00:57.145296 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k7lhg" podUID="56e5cbed-1b68-4a58-b8e0-9ed300ed6825" containerName="registry-server" containerID="cri-o://a6b5355758639963f07c3196d8722af04d255a551d910f01064c3655fbd4b44b" gracePeriod=2 Jan 26 09:00:58 crc kubenswrapper[4806]: I0126 09:00:58.175860 4806 generic.go:334] "Generic (PLEG): container finished" podID="56e5cbed-1b68-4a58-b8e0-9ed300ed6825" containerID="a6b5355758639963f07c3196d8722af04d255a551d910f01064c3655fbd4b44b" exitCode=0 Jan 26 09:00:58 crc kubenswrapper[4806]: I0126 09:00:58.175954 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7lhg" event={"ID":"56e5cbed-1b68-4a58-b8e0-9ed300ed6825","Type":"ContainerDied","Data":"a6b5355758639963f07c3196d8722af04d255a551d910f01064c3655fbd4b44b"} Jan 26 09:00:58 crc kubenswrapper[4806]: I0126 09:00:58.393213 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:58 crc kubenswrapper[4806]: I0126 09:00:58.456437 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-utilities\") pod \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\" (UID: \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\") " Jan 26 09:00:58 crc kubenswrapper[4806]: I0126 09:00:58.456664 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-catalog-content\") pod \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\" (UID: \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\") " Jan 26 09:00:58 crc kubenswrapper[4806]: I0126 09:00:58.456715 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8szqh\" (UniqueName: \"kubernetes.io/projected/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-kube-api-access-8szqh\") pod \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\" (UID: \"56e5cbed-1b68-4a58-b8e0-9ed300ed6825\") " Jan 26 09:00:58 crc kubenswrapper[4806]: I0126 09:00:58.458238 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-utilities" (OuterVolumeSpecName: "utilities") pod "56e5cbed-1b68-4a58-b8e0-9ed300ed6825" (UID: "56e5cbed-1b68-4a58-b8e0-9ed300ed6825"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:00:58 crc kubenswrapper[4806]: I0126 09:00:58.462772 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-kube-api-access-8szqh" (OuterVolumeSpecName: "kube-api-access-8szqh") pod "56e5cbed-1b68-4a58-b8e0-9ed300ed6825" (UID: "56e5cbed-1b68-4a58-b8e0-9ed300ed6825"). InnerVolumeSpecName "kube-api-access-8szqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:00:58 crc kubenswrapper[4806]: I0126 09:00:58.481532 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56e5cbed-1b68-4a58-b8e0-9ed300ed6825" (UID: "56e5cbed-1b68-4a58-b8e0-9ed300ed6825"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:00:58 crc kubenswrapper[4806]: I0126 09:00:58.558671 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8szqh\" (UniqueName: \"kubernetes.io/projected/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-kube-api-access-8szqh\") on node \"crc\" DevicePath \"\"" Jan 26 09:00:58 crc kubenswrapper[4806]: I0126 09:00:58.558698 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:00:58 crc kubenswrapper[4806]: I0126 09:00:58.558709 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56e5cbed-1b68-4a58-b8e0-9ed300ed6825-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:00:59 crc kubenswrapper[4806]: I0126 09:00:59.192782 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k7lhg" event={"ID":"56e5cbed-1b68-4a58-b8e0-9ed300ed6825","Type":"ContainerDied","Data":"fa6b6ab0447b7123556ec6db96ae0b01e31a2aefb5ec5610b873fe89aad94f6a"} Jan 26 09:00:59 crc kubenswrapper[4806]: I0126 09:00:59.193146 4806 scope.go:117] "RemoveContainer" containerID="a6b5355758639963f07c3196d8722af04d255a551d910f01064c3655fbd4b44b" Jan 26 09:00:59 crc kubenswrapper[4806]: I0126 09:00:59.192802 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k7lhg" Jan 26 09:00:59 crc kubenswrapper[4806]: I0126 09:00:59.224573 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7lhg"] Jan 26 09:00:59 crc kubenswrapper[4806]: I0126 09:00:59.233303 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k7lhg"] Jan 26 09:00:59 crc kubenswrapper[4806]: I0126 09:00:59.236078 4806 scope.go:117] "RemoveContainer" containerID="e2a39daca47f22fcaafe49cc28a243fd11f00651bfaec98ec8cc95fa6e901392" Jan 26 09:00:59 crc kubenswrapper[4806]: I0126 09:00:59.437880 4806 scope.go:117] "RemoveContainer" containerID="f4380de4144eb322f8c0bfbf74deb29c07e6bb19e60e5afe22d6c93e111242c0" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.167137 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490301-fw69s"] Jan 26 09:01:00 crc kubenswrapper[4806]: E0126 09:01:00.167785 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e5cbed-1b68-4a58-b8e0-9ed300ed6825" containerName="extract-utilities" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.167815 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e5cbed-1b68-4a58-b8e0-9ed300ed6825" containerName="extract-utilities" Jan 26 09:01:00 crc kubenswrapper[4806]: E0126 09:01:00.167862 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e5cbed-1b68-4a58-b8e0-9ed300ed6825" containerName="extract-content" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.167880 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e5cbed-1b68-4a58-b8e0-9ed300ed6825" containerName="extract-content" Jan 26 09:01:00 crc kubenswrapper[4806]: E0126 09:01:00.167935 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e5cbed-1b68-4a58-b8e0-9ed300ed6825" containerName="registry-server" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.167949 4806 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="56e5cbed-1b68-4a58-b8e0-9ed300ed6825" containerName="registry-server" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.168338 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e5cbed-1b68-4a58-b8e0-9ed300ed6825" containerName="registry-server" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.169426 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.181538 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490301-fw69s"] Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.289410 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl2jd\" (UniqueName: \"kubernetes.io/projected/5fba021e-0dda-4793-abae-5b9137baf1ef-kube-api-access-nl2jd\") pod \"keystone-cron-29490301-fw69s\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.289954 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-config-data\") pod \"keystone-cron-29490301-fw69s\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.289984 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-combined-ca-bundle\") pod \"keystone-cron-29490301-fw69s\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.290152 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-fernet-keys\") pod \"keystone-cron-29490301-fw69s\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.391902 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-fernet-keys\") pod \"keystone-cron-29490301-fw69s\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.392046 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl2jd\" (UniqueName: \"kubernetes.io/projected/5fba021e-0dda-4793-abae-5b9137baf1ef-kube-api-access-nl2jd\") pod \"keystone-cron-29490301-fw69s\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.392119 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-config-data\") pod \"keystone-cron-29490301-fw69s\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.392144 4806 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-combined-ca-bundle\") pod \"keystone-cron-29490301-fw69s\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.399148 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-combined-ca-bundle\") pod \"keystone-cron-29490301-fw69s\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.400040 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-config-data\") pod \"keystone-cron-29490301-fw69s\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.402642 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-fernet-keys\") pod \"keystone-cron-29490301-fw69s\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.407473 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl2jd\" (UniqueName: \"kubernetes.io/projected/5fba021e-0dda-4793-abae-5b9137baf1ef-kube-api-access-nl2jd\") pod \"keystone-cron-29490301-fw69s\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.488753 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:00 crc kubenswrapper[4806]: I0126 09:01:00.940687 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490301-fw69s"] Jan 26 09:01:01 crc kubenswrapper[4806]: I0126 09:01:01.055763 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56e5cbed-1b68-4a58-b8e0-9ed300ed6825" path="/var/lib/kubelet/pods/56e5cbed-1b68-4a58-b8e0-9ed300ed6825/volumes" Jan 26 09:01:01 crc kubenswrapper[4806]: I0126 09:01:01.248454 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490301-fw69s" event={"ID":"5fba021e-0dda-4793-abae-5b9137baf1ef","Type":"ContainerStarted","Data":"d57b092d049a925749830143146907eb8b1da4aff72666c86a5153309056e0e7"} Jan 26 09:01:02 crc kubenswrapper[4806]: I0126 09:01:02.259322 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490301-fw69s" event={"ID":"5fba021e-0dda-4793-abae-5b9137baf1ef","Type":"ContainerStarted","Data":"9620f96ca9b33dfd53c5ac883fa8b16d786ba7b9b16fa1ca0ed3387245245a6d"} Jan 26 09:01:02 crc kubenswrapper[4806]: I0126 09:01:02.276030 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490301-fw69s" podStartSLOduration=2.276007826 podStartE2EDuration="2.276007826s" podCreationTimestamp="2026-01-26 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:01:02.275131741 +0000 UTC m=+4041.539539797" watchObservedRunningTime="2026-01-26 09:01:02.276007826 +0000 UTC m=+4041.540415882" Jan 26 09:01:03 crc kubenswrapper[4806]: I0126 09:01:03.043232 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:01:03 crc kubenswrapper[4806]: E0126 09:01:03.043570 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:01:04 crc kubenswrapper[4806]: I0126 09:01:04.277963 4806 generic.go:334] "Generic (PLEG): container finished" podID="5fba021e-0dda-4793-abae-5b9137baf1ef" containerID="9620f96ca9b33dfd53c5ac883fa8b16d786ba7b9b16fa1ca0ed3387245245a6d" exitCode=0 Jan 26 09:01:04 crc kubenswrapper[4806]: I0126 09:01:04.278006 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490301-fw69s" event={"ID":"5fba021e-0dda-4793-abae-5b9137baf1ef","Type":"ContainerDied","Data":"9620f96ca9b33dfd53c5ac883fa8b16d786ba7b9b16fa1ca0ed3387245245a6d"} Jan 26 09:01:05 crc kubenswrapper[4806]: I0126 09:01:05.748691 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:05 crc kubenswrapper[4806]: I0126 09:01:05.921630 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-fernet-keys\") pod \"5fba021e-0dda-4793-abae-5b9137baf1ef\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " Jan 26 09:01:05 crc kubenswrapper[4806]: I0126 09:01:05.922080 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-combined-ca-bundle\") pod \"5fba021e-0dda-4793-abae-5b9137baf1ef\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " Jan 26 09:01:05 crc kubenswrapper[4806]: I0126 09:01:05.922109 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-config-data\") pod \"5fba021e-0dda-4793-abae-5b9137baf1ef\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " Jan 26 09:01:05 crc kubenswrapper[4806]: I0126 09:01:05.922232 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl2jd\" (UniqueName: \"kubernetes.io/projected/5fba021e-0dda-4793-abae-5b9137baf1ef-kube-api-access-nl2jd\") pod \"5fba021e-0dda-4793-abae-5b9137baf1ef\" (UID: \"5fba021e-0dda-4793-abae-5b9137baf1ef\") " Jan 26 09:01:05 crc kubenswrapper[4806]: I0126 09:01:05.928252 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5fba021e-0dda-4793-abae-5b9137baf1ef" (UID: "5fba021e-0dda-4793-abae-5b9137baf1ef"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:01:05 crc kubenswrapper[4806]: I0126 09:01:05.929097 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fba021e-0dda-4793-abae-5b9137baf1ef-kube-api-access-nl2jd" (OuterVolumeSpecName: "kube-api-access-nl2jd") pod "5fba021e-0dda-4793-abae-5b9137baf1ef" (UID: "5fba021e-0dda-4793-abae-5b9137baf1ef"). InnerVolumeSpecName "kube-api-access-nl2jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:01:05 crc kubenswrapper[4806]: I0126 09:01:05.959970 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fba021e-0dda-4793-abae-5b9137baf1ef" (UID: "5fba021e-0dda-4793-abae-5b9137baf1ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:01:06 crc kubenswrapper[4806]: I0126 09:01:06.008857 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-config-data" (OuterVolumeSpecName: "config-data") pod "5fba021e-0dda-4793-abae-5b9137baf1ef" (UID: "5fba021e-0dda-4793-abae-5b9137baf1ef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:01:06 crc kubenswrapper[4806]: I0126 09:01:06.024231 4806 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 09:01:06 crc kubenswrapper[4806]: I0126 09:01:06.024268 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:01:06 crc kubenswrapper[4806]: I0126 09:01:06.024279 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fba021e-0dda-4793-abae-5b9137baf1ef-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:01:06 crc kubenswrapper[4806]: I0126 09:01:06.024288 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl2jd\" (UniqueName: \"kubernetes.io/projected/5fba021e-0dda-4793-abae-5b9137baf1ef-kube-api-access-nl2jd\") on node \"crc\" DevicePath \"\"" Jan 26 09:01:06 crc kubenswrapper[4806]: I0126 09:01:06.298513 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490301-fw69s" event={"ID":"5fba021e-0dda-4793-abae-5b9137baf1ef","Type":"ContainerDied","Data":"d57b092d049a925749830143146907eb8b1da4aff72666c86a5153309056e0e7"} Jan 26 09:01:06 crc kubenswrapper[4806]: I0126 09:01:06.298922 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d57b092d049a925749830143146907eb8b1da4aff72666c86a5153309056e0e7" Jan 26 09:01:06 crc kubenswrapper[4806]: I0126 09:01:06.298751 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490301-fw69s" Jan 26 09:01:16 crc kubenswrapper[4806]: I0126 09:01:16.041771 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:01:16 crc kubenswrapper[4806]: E0126 09:01:16.042664 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.267409 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s88rh"] Jan 26 09:01:27 crc kubenswrapper[4806]: E0126 09:01:27.268475 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fba021e-0dda-4793-abae-5b9137baf1ef" containerName="keystone-cron" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.268492 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fba021e-0dda-4793-abae-5b9137baf1ef" containerName="keystone-cron" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.268778 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fba021e-0dda-4793-abae-5b9137baf1ef" containerName="keystone-cron" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.270514 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.285544 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s88rh"] Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.415444 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f358730-e17c-4a20-b173-99f7eeb1a947-catalog-content\") pod \"redhat-operators-s88rh\" (UID: \"1f358730-e17c-4a20-b173-99f7eeb1a947\") " pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.415547 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wgw4\" (UniqueName: \"kubernetes.io/projected/1f358730-e17c-4a20-b173-99f7eeb1a947-kube-api-access-2wgw4\") pod \"redhat-operators-s88rh\" (UID: \"1f358730-e17c-4a20-b173-99f7eeb1a947\") " pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.415591 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f358730-e17c-4a20-b173-99f7eeb1a947-utilities\") pod \"redhat-operators-s88rh\" (UID: \"1f358730-e17c-4a20-b173-99f7eeb1a947\") " pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.517260 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f358730-e17c-4a20-b173-99f7eeb1a947-catalog-content\") pod \"redhat-operators-s88rh\" (UID: \"1f358730-e17c-4a20-b173-99f7eeb1a947\") " pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.517359 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wgw4\" (UniqueName: \"kubernetes.io/projected/1f358730-e17c-4a20-b173-99f7eeb1a947-kube-api-access-2wgw4\") pod \"redhat-operators-s88rh\" (UID: \"1f358730-e17c-4a20-b173-99f7eeb1a947\") " pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.517424 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f358730-e17c-4a20-b173-99f7eeb1a947-utilities\") pod \"redhat-operators-s88rh\" (UID: \"1f358730-e17c-4a20-b173-99f7eeb1a947\") " pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.518215 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f358730-e17c-4a20-b173-99f7eeb1a947-utilities\") pod \"redhat-operators-s88rh\" (UID: \"1f358730-e17c-4a20-b173-99f7eeb1a947\") " pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.519142 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f358730-e17c-4a20-b173-99f7eeb1a947-catalog-content\") pod \"redhat-operators-s88rh\" (UID: \"1f358730-e17c-4a20-b173-99f7eeb1a947\") " pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.540336 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2wgw4\" (UniqueName: \"kubernetes.io/projected/1f358730-e17c-4a20-b173-99f7eeb1a947-kube-api-access-2wgw4\") pod \"redhat-operators-s88rh\" (UID: \"1f358730-e17c-4a20-b173-99f7eeb1a947\") " pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:27 crc kubenswrapper[4806]: I0126 09:01:27.611366 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:28 crc kubenswrapper[4806]: I0126 09:01:28.091710 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s88rh"] Jan 26 09:01:28 crc kubenswrapper[4806]: I0126 09:01:28.478238 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f358730-e17c-4a20-b173-99f7eeb1a947" containerID="d24022720b4bdb41c163580bc12a0f1906cade32cccda96eae4754a5127e1e80" exitCode=0 Jan 26 09:01:28 crc kubenswrapper[4806]: I0126 09:01:28.478436 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s88rh" event={"ID":"1f358730-e17c-4a20-b173-99f7eeb1a947","Type":"ContainerDied","Data":"d24022720b4bdb41c163580bc12a0f1906cade32cccda96eae4754a5127e1e80"} Jan 26 09:01:28 crc kubenswrapper[4806]: I0126 09:01:28.478618 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s88rh" event={"ID":"1f358730-e17c-4a20-b173-99f7eeb1a947","Type":"ContainerStarted","Data":"99e08c84b0b70f01b159768f6affb61029d28f44a6c51ce81b5dfa2fa1f9ab8e"} Jan 26 09:01:29 crc kubenswrapper[4806]: I0126 09:01:29.493214 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s88rh" event={"ID":"1f358730-e17c-4a20-b173-99f7eeb1a947","Type":"ContainerStarted","Data":"9eca8f774f598fafd6cd7c1b41f8e31b4a66a013805de44c29b798072b6dca1c"} Jan 26 09:01:31 crc kubenswrapper[4806]: I0126 09:01:31.050665 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:01:31 crc kubenswrapper[4806]: E0126 09:01:31.051420 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:01:33 crc kubenswrapper[4806]: I0126 09:01:33.529582 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f358730-e17c-4a20-b173-99f7eeb1a947" containerID="9eca8f774f598fafd6cd7c1b41f8e31b4a66a013805de44c29b798072b6dca1c" exitCode=0 Jan 26 09:01:33 crc kubenswrapper[4806]: I0126 09:01:33.529873 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s88rh" event={"ID":"1f358730-e17c-4a20-b173-99f7eeb1a947","Type":"ContainerDied","Data":"9eca8f774f598fafd6cd7c1b41f8e31b4a66a013805de44c29b798072b6dca1c"} Jan 26 09:01:34 crc kubenswrapper[4806]: I0126 09:01:34.539846 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s88rh" event={"ID":"1f358730-e17c-4a20-b173-99f7eeb1a947","Type":"ContainerStarted","Data":"5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12"} Jan 26 09:01:34 crc kubenswrapper[4806]: I0126 09:01:34.563629 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-s88rh" podStartSLOduration=2.037531851 podStartE2EDuration="7.563609766s" podCreationTimestamp="2026-01-26 09:01:27 +0000 UTC" firstStartedPulling="2026-01-26 09:01:28.48006395 +0000 UTC m=+4067.744472006" lastFinishedPulling="2026-01-26 09:01:34.006141865 +0000 UTC m=+4073.270549921" observedRunningTime="2026-01-26 09:01:34.558684588 +0000 UTC m=+4073.823092654" watchObservedRunningTime="2026-01-26 09:01:34.563609766 +0000 UTC m=+4073.828017822" Jan 26 09:01:37 crc kubenswrapper[4806]: I0126 09:01:37.611578 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:37 crc kubenswrapper[4806]: I0126 09:01:37.612187 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:38 crc kubenswrapper[4806]: I0126 09:01:38.661321 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s88rh" podUID="1f358730-e17c-4a20-b173-99f7eeb1a947" containerName="registry-server" probeResult="failure" output=< Jan 26 09:01:38 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 09:01:38 crc kubenswrapper[4806]: > Jan 26 09:01:43 crc kubenswrapper[4806]: I0126 09:01:43.042774 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:01:43 crc kubenswrapper[4806]: E0126 09:01:43.043848 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:01:47 crc kubenswrapper[4806]: I0126 09:01:47.670752 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:47 crc kubenswrapper[4806]: I0126 09:01:47.727825 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:47 crc kubenswrapper[4806]: I0126 09:01:47.911035 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s88rh"] Jan 26 09:01:49 crc kubenswrapper[4806]: I0126 09:01:49.671141 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s88rh" podUID="1f358730-e17c-4a20-b173-99f7eeb1a947" containerName="registry-server" containerID="cri-o://5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12" gracePeriod=2 Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.188298 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.346398 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wgw4\" (UniqueName: \"kubernetes.io/projected/1f358730-e17c-4a20-b173-99f7eeb1a947-kube-api-access-2wgw4\") pod \"1f358730-e17c-4a20-b173-99f7eeb1a947\" (UID: \"1f358730-e17c-4a20-b173-99f7eeb1a947\") " Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.346474 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f358730-e17c-4a20-b173-99f7eeb1a947-catalog-content\") pod \"1f358730-e17c-4a20-b173-99f7eeb1a947\" (UID: \"1f358730-e17c-4a20-b173-99f7eeb1a947\") " Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.346701 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f358730-e17c-4a20-b173-99f7eeb1a947-utilities\") pod \"1f358730-e17c-4a20-b173-99f7eeb1a947\" (UID: \"1f358730-e17c-4a20-b173-99f7eeb1a947\") " Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.347796 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f358730-e17c-4a20-b173-99f7eeb1a947-utilities" (OuterVolumeSpecName: "utilities") pod "1f358730-e17c-4a20-b173-99f7eeb1a947" (UID: "1f358730-e17c-4a20-b173-99f7eeb1a947"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.361201 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f358730-e17c-4a20-b173-99f7eeb1a947-kube-api-access-2wgw4" (OuterVolumeSpecName: "kube-api-access-2wgw4") pod "1f358730-e17c-4a20-b173-99f7eeb1a947" (UID: "1f358730-e17c-4a20-b173-99f7eeb1a947"). InnerVolumeSpecName "kube-api-access-2wgw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.450959 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f358730-e17c-4a20-b173-99f7eeb1a947-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.451030 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wgw4\" (UniqueName: \"kubernetes.io/projected/1f358730-e17c-4a20-b173-99f7eeb1a947-kube-api-access-2wgw4\") on node \"crc\" DevicePath \"\"" Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.464218 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f358730-e17c-4a20-b173-99f7eeb1a947-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f358730-e17c-4a20-b173-99f7eeb1a947" (UID: "1f358730-e17c-4a20-b173-99f7eeb1a947"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.553631 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f358730-e17c-4a20-b173-99f7eeb1a947-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.697417 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f358730-e17c-4a20-b173-99f7eeb1a947" containerID="5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12" exitCode=0 Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.697474 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s88rh" event={"ID":"1f358730-e17c-4a20-b173-99f7eeb1a947","Type":"ContainerDied","Data":"5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12"} Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.697500 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s88rh" event={"ID":"1f358730-e17c-4a20-b173-99f7eeb1a947","Type":"ContainerDied","Data":"99e08c84b0b70f01b159768f6affb61029d28f44a6c51ce81b5dfa2fa1f9ab8e"} Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.697544 4806 scope.go:117] "RemoveContainer" containerID="5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12" Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.697707 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s88rh" Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.746845 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s88rh"] Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.746883 4806 scope.go:117] "RemoveContainer" containerID="9eca8f774f598fafd6cd7c1b41f8e31b4a66a013805de44c29b798072b6dca1c" Jan 26 09:01:50 crc kubenswrapper[4806]: I0126 09:01:50.759179 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s88rh"] Jan 26 09:01:51 crc kubenswrapper[4806]: I0126 09:01:51.052758 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f358730-e17c-4a20-b173-99f7eeb1a947" path="/var/lib/kubelet/pods/1f358730-e17c-4a20-b173-99f7eeb1a947/volumes" Jan 26 09:01:51 crc kubenswrapper[4806]: I0126 09:01:51.345081 4806 scope.go:117] "RemoveContainer" containerID="d24022720b4bdb41c163580bc12a0f1906cade32cccda96eae4754a5127e1e80" Jan 26 09:01:51 crc kubenswrapper[4806]: I0126 09:01:51.397430 4806 scope.go:117] "RemoveContainer" containerID="5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12" Jan 26 09:01:51 crc kubenswrapper[4806]: E0126 09:01:51.398281 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12\": container with ID starting with 5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12 not found: ID does not exist" containerID="5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12" Jan 26 09:01:51 crc kubenswrapper[4806]: I0126 09:01:51.398338 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12"} err="failed to get container status \"5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12\": rpc error: code = NotFound desc 
= could not find container \"5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12\": container with ID starting with 5a9437aa27528ebb7cf67344d7e3c071485e06c60c423173736bfa06ba67dc12 not found: ID does not exist" Jan 26 09:01:51 crc kubenswrapper[4806]: I0126 09:01:51.398449 4806 scope.go:117] "RemoveContainer" containerID="9eca8f774f598fafd6cd7c1b41f8e31b4a66a013805de44c29b798072b6dca1c" Jan 26 09:01:51 crc kubenswrapper[4806]: E0126 09:01:51.399010 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9eca8f774f598fafd6cd7c1b41f8e31b4a66a013805de44c29b798072b6dca1c\": container with ID starting with 9eca8f774f598fafd6cd7c1b41f8e31b4a66a013805de44c29b798072b6dca1c not found: ID does not exist" containerID="9eca8f774f598fafd6cd7c1b41f8e31b4a66a013805de44c29b798072b6dca1c" Jan 26 09:01:51 crc kubenswrapper[4806]: I0126 09:01:51.399095 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9eca8f774f598fafd6cd7c1b41f8e31b4a66a013805de44c29b798072b6dca1c"} err="failed to get container status \"9eca8f774f598fafd6cd7c1b41f8e31b4a66a013805de44c29b798072b6dca1c\": rpc error: code = NotFound desc = could not find container \"9eca8f774f598fafd6cd7c1b41f8e31b4a66a013805de44c29b798072b6dca1c\": container with ID starting with 9eca8f774f598fafd6cd7c1b41f8e31b4a66a013805de44c29b798072b6dca1c not found: ID does not exist" Jan 26 09:01:51 crc kubenswrapper[4806]: I0126 09:01:51.399126 4806 scope.go:117] "RemoveContainer" containerID="d24022720b4bdb41c163580bc12a0f1906cade32cccda96eae4754a5127e1e80" Jan 26 09:01:51 crc kubenswrapper[4806]: E0126 09:01:51.399686 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d24022720b4bdb41c163580bc12a0f1906cade32cccda96eae4754a5127e1e80\": container with ID starting with d24022720b4bdb41c163580bc12a0f1906cade32cccda96eae4754a5127e1e80 not found: ID does not exist" containerID="d24022720b4bdb41c163580bc12a0f1906cade32cccda96eae4754a5127e1e80" Jan 26 09:01:51 crc kubenswrapper[4806]: I0126 09:01:51.399735 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d24022720b4bdb41c163580bc12a0f1906cade32cccda96eae4754a5127e1e80"} err="failed to get container status \"d24022720b4bdb41c163580bc12a0f1906cade32cccda96eae4754a5127e1e80\": rpc error: code = NotFound desc = could not find container \"d24022720b4bdb41c163580bc12a0f1906cade32cccda96eae4754a5127e1e80\": container with ID starting with d24022720b4bdb41c163580bc12a0f1906cade32cccda96eae4754a5127e1e80 not found: ID does not exist" Jan 26 09:01:57 crc kubenswrapper[4806]: I0126 09:01:57.046257 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:01:57 crc kubenswrapper[4806]: E0126 09:01:57.047877 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:02:11 crc kubenswrapper[4806]: I0126 09:02:11.049933 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 
26 09:02:11 crc kubenswrapper[4806]: E0126 09:02:11.050767 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:02:22 crc kubenswrapper[4806]: I0126 09:02:22.041948 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:02:22 crc kubenswrapper[4806]: E0126 09:02:22.042669 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:02:33 crc kubenswrapper[4806]: I0126 09:02:33.042915 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:02:33 crc kubenswrapper[4806]: E0126 09:02:33.043724 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:02:48 crc kubenswrapper[4806]: I0126 09:02:48.042226 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:02:49 crc kubenswrapper[4806]: I0126 09:02:49.200247 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"c557dee8f3a16a585391074d1cc9d59d8ecb6aff5d1c0fdf3387cdb8c81bb2cf"} Jan 26 09:05:15 crc kubenswrapper[4806]: I0126 09:05:15.806940 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:05:15 crc kubenswrapper[4806]: I0126 09:05:15.808076 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:05:45 crc kubenswrapper[4806]: I0126 09:05:45.806455 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:05:45 crc kubenswrapper[4806]: I0126 09:05:45.806980 4806 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:05:45 crc kubenswrapper[4806]: I0126 09:05:45.996189 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9bwhb"] Jan 26 09:05:45 crc kubenswrapper[4806]: E0126 09:05:45.997319 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f358730-e17c-4a20-b173-99f7eeb1a947" containerName="extract-utilities" Jan 26 09:05:45 crc kubenswrapper[4806]: I0126 09:05:45.997352 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f358730-e17c-4a20-b173-99f7eeb1a947" containerName="extract-utilities" Jan 26 09:05:45 crc kubenswrapper[4806]: E0126 09:05:45.997384 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f358730-e17c-4a20-b173-99f7eeb1a947" containerName="extract-content" Jan 26 09:05:45 crc kubenswrapper[4806]: I0126 09:05:45.997396 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f358730-e17c-4a20-b173-99f7eeb1a947" containerName="extract-content" Jan 26 09:05:45 crc kubenswrapper[4806]: E0126 09:05:45.997424 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f358730-e17c-4a20-b173-99f7eeb1a947" containerName="registry-server" Jan 26 09:05:45 crc kubenswrapper[4806]: I0126 09:05:45.997437 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f358730-e17c-4a20-b173-99f7eeb1a947" containerName="registry-server" Jan 26 09:05:45 crc kubenswrapper[4806]: I0126 09:05:45.997795 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f358730-e17c-4a20-b173-99f7eeb1a947" containerName="registry-server" Jan 26 09:05:46 crc kubenswrapper[4806]: I0126 09:05:46.001301 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:46 crc kubenswrapper[4806]: I0126 09:05:46.009515 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9bwhb"] Jan 26 09:05:46 crc kubenswrapper[4806]: I0126 09:05:46.142797 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgmf7\" (UniqueName: \"kubernetes.io/projected/7b5abc09-6069-421c-b9bb-198889dcc2c3-kube-api-access-wgmf7\") pod \"certified-operators-9bwhb\" (UID: \"7b5abc09-6069-421c-b9bb-198889dcc2c3\") " pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:46 crc kubenswrapper[4806]: I0126 09:05:46.143380 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5abc09-6069-421c-b9bb-198889dcc2c3-utilities\") pod \"certified-operators-9bwhb\" (UID: \"7b5abc09-6069-421c-b9bb-198889dcc2c3\") " pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:46 crc kubenswrapper[4806]: I0126 09:05:46.144032 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5abc09-6069-421c-b9bb-198889dcc2c3-catalog-content\") pod \"certified-operators-9bwhb\" (UID: \"7b5abc09-6069-421c-b9bb-198889dcc2c3\") " pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:46 crc kubenswrapper[4806]: I0126 09:05:46.245609 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgmf7\" (UniqueName: \"kubernetes.io/projected/7b5abc09-6069-421c-b9bb-198889dcc2c3-kube-api-access-wgmf7\") pod \"certified-operators-9bwhb\" (UID: \"7b5abc09-6069-421c-b9bb-198889dcc2c3\") " pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:46 crc kubenswrapper[4806]: I0126 09:05:46.245888 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5abc09-6069-421c-b9bb-198889dcc2c3-utilities\") pod \"certified-operators-9bwhb\" (UID: \"7b5abc09-6069-421c-b9bb-198889dcc2c3\") " pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:46 crc kubenswrapper[4806]: I0126 09:05:46.246027 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5abc09-6069-421c-b9bb-198889dcc2c3-catalog-content\") pod \"certified-operators-9bwhb\" (UID: \"7b5abc09-6069-421c-b9bb-198889dcc2c3\") " pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:46 crc kubenswrapper[4806]: I0126 09:05:46.246530 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5abc09-6069-421c-b9bb-198889dcc2c3-utilities\") pod \"certified-operators-9bwhb\" (UID: \"7b5abc09-6069-421c-b9bb-198889dcc2c3\") " pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:46 crc kubenswrapper[4806]: I0126 09:05:46.246639 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5abc09-6069-421c-b9bb-198889dcc2c3-catalog-content\") pod \"certified-operators-9bwhb\" (UID: \"7b5abc09-6069-421c-b9bb-198889dcc2c3\") " pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:46 crc kubenswrapper[4806]: I0126 09:05:46.276311 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wgmf7\" (UniqueName: \"kubernetes.io/projected/7b5abc09-6069-421c-b9bb-198889dcc2c3-kube-api-access-wgmf7\") pod \"certified-operators-9bwhb\" (UID: \"7b5abc09-6069-421c-b9bb-198889dcc2c3\") " pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:46 crc kubenswrapper[4806]: I0126 09:05:46.324565 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:47 crc kubenswrapper[4806]: I0126 09:05:47.019107 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9bwhb"] Jan 26 09:05:47 crc kubenswrapper[4806]: I0126 09:05:47.836529 4806 generic.go:334] "Generic (PLEG): container finished" podID="7b5abc09-6069-421c-b9bb-198889dcc2c3" containerID="2b2b9f57e50f5cd876e1d96d5d426d7b04ed60a82c8643959ed468dc2f4abd66" exitCode=0 Jan 26 09:05:47 crc kubenswrapper[4806]: I0126 09:05:47.836574 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bwhb" event={"ID":"7b5abc09-6069-421c-b9bb-198889dcc2c3","Type":"ContainerDied","Data":"2b2b9f57e50f5cd876e1d96d5d426d7b04ed60a82c8643959ed468dc2f4abd66"} Jan 26 09:05:47 crc kubenswrapper[4806]: I0126 09:05:47.836601 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bwhb" event={"ID":"7b5abc09-6069-421c-b9bb-198889dcc2c3","Type":"ContainerStarted","Data":"709ee9f9c612e1cc9fb2678e71b808035d1c4dff15a93bc9d7e9c84aa98a04cd"} Jan 26 09:05:47 crc kubenswrapper[4806]: I0126 09:05:47.838627 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 09:05:48 crc kubenswrapper[4806]: I0126 09:05:48.849837 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bwhb" event={"ID":"7b5abc09-6069-421c-b9bb-198889dcc2c3","Type":"ContainerStarted","Data":"68e8dc3d552bccb131617e794e36ed6c552ca08946d82fa03cf126be40c58329"} Jan 26 09:05:49 crc kubenswrapper[4806]: I0126 09:05:49.861999 4806 generic.go:334] "Generic (PLEG): container finished" podID="7b5abc09-6069-421c-b9bb-198889dcc2c3" containerID="68e8dc3d552bccb131617e794e36ed6c552ca08946d82fa03cf126be40c58329" exitCode=0 Jan 26 09:05:49 crc kubenswrapper[4806]: I0126 09:05:49.862064 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bwhb" event={"ID":"7b5abc09-6069-421c-b9bb-198889dcc2c3","Type":"ContainerDied","Data":"68e8dc3d552bccb131617e794e36ed6c552ca08946d82fa03cf126be40c58329"} Jan 26 09:05:50 crc kubenswrapper[4806]: I0126 09:05:50.872205 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bwhb" event={"ID":"7b5abc09-6069-421c-b9bb-198889dcc2c3","Type":"ContainerStarted","Data":"73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae"} Jan 26 09:05:50 crc kubenswrapper[4806]: I0126 09:05:50.909948 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9bwhb" podStartSLOduration=3.292333724 podStartE2EDuration="5.909929319s" podCreationTimestamp="2026-01-26 09:05:45 +0000 UTC" firstStartedPulling="2026-01-26 09:05:47.838407083 +0000 UTC m=+4327.102815139" lastFinishedPulling="2026-01-26 09:05:50.456002658 +0000 UTC m=+4329.720410734" observedRunningTime="2026-01-26 09:05:50.900094011 +0000 UTC m=+4330.164502067" watchObservedRunningTime="2026-01-26 
09:05:50.909929319 +0000 UTC m=+4330.174337375" Jan 26 09:05:56 crc kubenswrapper[4806]: I0126 09:05:56.325179 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:56 crc kubenswrapper[4806]: I0126 09:05:56.325846 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:56 crc kubenswrapper[4806]: I0126 09:05:56.390923 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:56 crc kubenswrapper[4806]: I0126 09:05:56.969321 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:57 crc kubenswrapper[4806]: I0126 09:05:57.022732 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9bwhb"] Jan 26 09:05:58 crc kubenswrapper[4806]: I0126 09:05:58.938579 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9bwhb" podUID="7b5abc09-6069-421c-b9bb-198889dcc2c3" containerName="registry-server" containerID="cri-o://73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae" gracePeriod=2 Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.444458 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.540772 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wgmf7\" (UniqueName: \"kubernetes.io/projected/7b5abc09-6069-421c-b9bb-198889dcc2c3-kube-api-access-wgmf7\") pod \"7b5abc09-6069-421c-b9bb-198889dcc2c3\" (UID: \"7b5abc09-6069-421c-b9bb-198889dcc2c3\") " Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.540828 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5abc09-6069-421c-b9bb-198889dcc2c3-utilities\") pod \"7b5abc09-6069-421c-b9bb-198889dcc2c3\" (UID: \"7b5abc09-6069-421c-b9bb-198889dcc2c3\") " Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.540880 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5abc09-6069-421c-b9bb-198889dcc2c3-catalog-content\") pod \"7b5abc09-6069-421c-b9bb-198889dcc2c3\" (UID: \"7b5abc09-6069-421c-b9bb-198889dcc2c3\") " Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.541735 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5abc09-6069-421c-b9bb-198889dcc2c3-utilities" (OuterVolumeSpecName: "utilities") pod "7b5abc09-6069-421c-b9bb-198889dcc2c3" (UID: "7b5abc09-6069-421c-b9bb-198889dcc2c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.546991 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b5abc09-6069-421c-b9bb-198889dcc2c3-kube-api-access-wgmf7" (OuterVolumeSpecName: "kube-api-access-wgmf7") pod "7b5abc09-6069-421c-b9bb-198889dcc2c3" (UID: "7b5abc09-6069-421c-b9bb-198889dcc2c3"). InnerVolumeSpecName "kube-api-access-wgmf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.587606 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b5abc09-6069-421c-b9bb-198889dcc2c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b5abc09-6069-421c-b9bb-198889dcc2c3" (UID: "7b5abc09-6069-421c-b9bb-198889dcc2c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.642955 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wgmf7\" (UniqueName: \"kubernetes.io/projected/7b5abc09-6069-421c-b9bb-198889dcc2c3-kube-api-access-wgmf7\") on node \"crc\" DevicePath \"\"" Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.643288 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b5abc09-6069-421c-b9bb-198889dcc2c3-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.643299 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b5abc09-6069-421c-b9bb-198889dcc2c3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.948149 4806 generic.go:334] "Generic (PLEG): container finished" podID="7b5abc09-6069-421c-b9bb-198889dcc2c3" containerID="73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae" exitCode=0 Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.948202 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bwhb" event={"ID":"7b5abc09-6069-421c-b9bb-198889dcc2c3","Type":"ContainerDied","Data":"73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae"} Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.948224 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9bwhb" Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.948235 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9bwhb" event={"ID":"7b5abc09-6069-421c-b9bb-198889dcc2c3","Type":"ContainerDied","Data":"709ee9f9c612e1cc9fb2678e71b808035d1c4dff15a93bc9d7e9c84aa98a04cd"} Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.948257 4806 scope.go:117] "RemoveContainer" containerID="73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae" Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.995830 4806 scope.go:117] "RemoveContainer" containerID="68e8dc3d552bccb131617e794e36ed6c552ca08946d82fa03cf126be40c58329" Jan 26 09:05:59 crc kubenswrapper[4806]: I0126 09:05:59.996596 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9bwhb"] Jan 26 09:06:00 crc kubenswrapper[4806]: I0126 09:06:00.005206 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9bwhb"] Jan 26 09:06:00 crc kubenswrapper[4806]: I0126 09:06:00.028000 4806 scope.go:117] "RemoveContainer" containerID="2b2b9f57e50f5cd876e1d96d5d426d7b04ed60a82c8643959ed468dc2f4abd66" Jan 26 09:06:00 crc kubenswrapper[4806]: I0126 09:06:00.081684 4806 scope.go:117] "RemoveContainer" containerID="73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae" Jan 26 09:06:00 crc kubenswrapper[4806]: E0126 09:06:00.082198 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae\": container with ID starting with 73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae not found: ID does not exist" containerID="73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae" Jan 26 09:06:00 crc kubenswrapper[4806]: I0126 09:06:00.082242 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae"} err="failed to get container status \"73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae\": rpc error: code = NotFound desc = could not find container \"73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae\": container with ID starting with 73c27aff590c9fb9fcd9cf57052a12b019daf98826bd3c8e45e22573314f2aae not found: ID does not exist" Jan 26 09:06:00 crc kubenswrapper[4806]: I0126 09:06:00.082266 4806 scope.go:117] "RemoveContainer" containerID="68e8dc3d552bccb131617e794e36ed6c552ca08946d82fa03cf126be40c58329" Jan 26 09:06:00 crc kubenswrapper[4806]: E0126 09:06:00.083262 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68e8dc3d552bccb131617e794e36ed6c552ca08946d82fa03cf126be40c58329\": container with ID starting with 68e8dc3d552bccb131617e794e36ed6c552ca08946d82fa03cf126be40c58329 not found: ID does not exist" containerID="68e8dc3d552bccb131617e794e36ed6c552ca08946d82fa03cf126be40c58329" Jan 26 09:06:00 crc kubenswrapper[4806]: I0126 09:06:00.083306 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e8dc3d552bccb131617e794e36ed6c552ca08946d82fa03cf126be40c58329"} err="failed to get container status \"68e8dc3d552bccb131617e794e36ed6c552ca08946d82fa03cf126be40c58329\": rpc error: code = NotFound desc = could not find 
container \"68e8dc3d552bccb131617e794e36ed6c552ca08946d82fa03cf126be40c58329\": container with ID starting with 68e8dc3d552bccb131617e794e36ed6c552ca08946d82fa03cf126be40c58329 not found: ID does not exist" Jan 26 09:06:00 crc kubenswrapper[4806]: I0126 09:06:00.083353 4806 scope.go:117] "RemoveContainer" containerID="2b2b9f57e50f5cd876e1d96d5d426d7b04ed60a82c8643959ed468dc2f4abd66" Jan 26 09:06:00 crc kubenswrapper[4806]: E0126 09:06:00.083851 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b2b9f57e50f5cd876e1d96d5d426d7b04ed60a82c8643959ed468dc2f4abd66\": container with ID starting with 2b2b9f57e50f5cd876e1d96d5d426d7b04ed60a82c8643959ed468dc2f4abd66 not found: ID does not exist" containerID="2b2b9f57e50f5cd876e1d96d5d426d7b04ed60a82c8643959ed468dc2f4abd66" Jan 26 09:06:00 crc kubenswrapper[4806]: I0126 09:06:00.083883 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b2b9f57e50f5cd876e1d96d5d426d7b04ed60a82c8643959ed468dc2f4abd66"} err="failed to get container status \"2b2b9f57e50f5cd876e1d96d5d426d7b04ed60a82c8643959ed468dc2f4abd66\": rpc error: code = NotFound desc = could not find container \"2b2b9f57e50f5cd876e1d96d5d426d7b04ed60a82c8643959ed468dc2f4abd66\": container with ID starting with 2b2b9f57e50f5cd876e1d96d5d426d7b04ed60a82c8643959ed468dc2f4abd66 not found: ID does not exist" Jan 26 09:06:01 crc kubenswrapper[4806]: I0126 09:06:01.052927 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b5abc09-6069-421c-b9bb-198889dcc2c3" path="/var/lib/kubelet/pods/7b5abc09-6069-421c-b9bb-198889dcc2c3/volumes" Jan 26 09:06:15 crc kubenswrapper[4806]: I0126 09:06:15.806513 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:06:15 crc kubenswrapper[4806]: I0126 09:06:15.807303 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:06:15 crc kubenswrapper[4806]: I0126 09:06:15.807361 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 09:06:15 crc kubenswrapper[4806]: I0126 09:06:15.808634 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c557dee8f3a16a585391074d1cc9d59d8ecb6aff5d1c0fdf3387cdb8c81bb2cf"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:06:15 crc kubenswrapper[4806]: I0126 09:06:15.808749 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://c557dee8f3a16a585391074d1cc9d59d8ecb6aff5d1c0fdf3387cdb8c81bb2cf" gracePeriod=600 Jan 26 09:06:16 crc kubenswrapper[4806]: I0126 09:06:16.091004 4806 
generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="c557dee8f3a16a585391074d1cc9d59d8ecb6aff5d1c0fdf3387cdb8c81bb2cf" exitCode=0 Jan 26 09:06:16 crc kubenswrapper[4806]: I0126 09:06:16.091085 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"c557dee8f3a16a585391074d1cc9d59d8ecb6aff5d1c0fdf3387cdb8c81bb2cf"} Jan 26 09:06:16 crc kubenswrapper[4806]: I0126 09:06:16.091159 4806 scope.go:117] "RemoveContainer" containerID="19d15f3db83358f34004e7d990728b7d25579f940569bef0ed6a4be6149f4437" Jan 26 09:06:17 crc kubenswrapper[4806]: I0126 09:06:17.101094 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542"} Jan 26 09:08:45 crc kubenswrapper[4806]: I0126 09:08:45.806024 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:08:45 crc kubenswrapper[4806]: I0126 09:08:45.806687 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:09:15 crc kubenswrapper[4806]: I0126 09:09:15.806680 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:09:15 crc kubenswrapper[4806]: I0126 09:09:15.808074 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:09:45 crc kubenswrapper[4806]: I0126 09:09:45.806894 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:09:45 crc kubenswrapper[4806]: I0126 09:09:45.807382 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:09:45 crc kubenswrapper[4806]: I0126 09:09:45.807449 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 09:09:45 crc kubenswrapper[4806]: I0126 09:09:45.808510 
4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:09:45 crc kubenswrapper[4806]: I0126 09:09:45.808628 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" gracePeriod=600 Jan 26 09:09:45 crc kubenswrapper[4806]: E0126 09:09:45.938611 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:09:45 crc kubenswrapper[4806]: I0126 09:09:45.995616 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" exitCode=0 Jan 26 09:09:45 crc kubenswrapper[4806]: I0126 09:09:45.995692 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542"} Jan 26 09:09:45 crc kubenswrapper[4806]: I0126 09:09:45.995731 4806 scope.go:117] "RemoveContainer" containerID="c557dee8f3a16a585391074d1cc9d59d8ecb6aff5d1c0fdf3387cdb8c81bb2cf" Jan 26 09:09:45 crc kubenswrapper[4806]: I0126 09:09:45.996385 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:09:45 crc kubenswrapper[4806]: E0126 09:09:45.996693 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:10:01 crc kubenswrapper[4806]: I0126 09:10:01.050027 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:10:01 crc kubenswrapper[4806]: E0126 09:10:01.052414 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:10:12 crc kubenswrapper[4806]: I0126 09:10:12.041856 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 
26 09:10:12 crc kubenswrapper[4806]: E0126 09:10:12.042596 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:10:24 crc kubenswrapper[4806]: I0126 09:10:24.042736 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:10:24 crc kubenswrapper[4806]: E0126 09:10:24.043567 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:10:39 crc kubenswrapper[4806]: I0126 09:10:39.042513 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:10:39 crc kubenswrapper[4806]: E0126 09:10:39.043402 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:10:45 crc kubenswrapper[4806]: I0126 09:10:45.971815 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sbl9t"] Jan 26 09:10:45 crc kubenswrapper[4806]: E0126 09:10:45.972864 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5abc09-6069-421c-b9bb-198889dcc2c3" containerName="extract-utilities" Jan 26 09:10:45 crc kubenswrapper[4806]: I0126 09:10:45.972878 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5abc09-6069-421c-b9bb-198889dcc2c3" containerName="extract-utilities" Jan 26 09:10:45 crc kubenswrapper[4806]: E0126 09:10:45.972995 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5abc09-6069-421c-b9bb-198889dcc2c3" containerName="extract-content" Jan 26 09:10:45 crc kubenswrapper[4806]: I0126 09:10:45.973042 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5abc09-6069-421c-b9bb-198889dcc2c3" containerName="extract-content" Jan 26 09:10:45 crc kubenswrapper[4806]: E0126 09:10:45.973058 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b5abc09-6069-421c-b9bb-198889dcc2c3" containerName="registry-server" Jan 26 09:10:45 crc kubenswrapper[4806]: I0126 09:10:45.973064 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b5abc09-6069-421c-b9bb-198889dcc2c3" containerName="registry-server" Jan 26 09:10:45 crc kubenswrapper[4806]: I0126 09:10:45.973248 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b5abc09-6069-421c-b9bb-198889dcc2c3" containerName="registry-server" Jan 26 09:10:45 crc kubenswrapper[4806]: I0126 09:10:45.974564 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:45 crc kubenswrapper[4806]: I0126 09:10:45.987242 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sbl9t"] Jan 26 09:10:46 crc kubenswrapper[4806]: I0126 09:10:46.076139 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59bab334-2b4b-4b73-80ac-6192ad036095-catalog-content\") pod \"community-operators-sbl9t\" (UID: \"59bab334-2b4b-4b73-80ac-6192ad036095\") " pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:46 crc kubenswrapper[4806]: I0126 09:10:46.076361 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjpbv\" (UniqueName: \"kubernetes.io/projected/59bab334-2b4b-4b73-80ac-6192ad036095-kube-api-access-tjpbv\") pod \"community-operators-sbl9t\" (UID: \"59bab334-2b4b-4b73-80ac-6192ad036095\") " pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:46 crc kubenswrapper[4806]: I0126 09:10:46.076696 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59bab334-2b4b-4b73-80ac-6192ad036095-utilities\") pod \"community-operators-sbl9t\" (UID: \"59bab334-2b4b-4b73-80ac-6192ad036095\") " pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:46 crc kubenswrapper[4806]: I0126 09:10:46.178606 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59bab334-2b4b-4b73-80ac-6192ad036095-catalog-content\") pod \"community-operators-sbl9t\" (UID: \"59bab334-2b4b-4b73-80ac-6192ad036095\") " pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:46 crc kubenswrapper[4806]: I0126 09:10:46.178679 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjpbv\" (UniqueName: \"kubernetes.io/projected/59bab334-2b4b-4b73-80ac-6192ad036095-kube-api-access-tjpbv\") pod \"community-operators-sbl9t\" (UID: \"59bab334-2b4b-4b73-80ac-6192ad036095\") " pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:46 crc kubenswrapper[4806]: I0126 09:10:46.178746 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59bab334-2b4b-4b73-80ac-6192ad036095-utilities\") pod \"community-operators-sbl9t\" (UID: \"59bab334-2b4b-4b73-80ac-6192ad036095\") " pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:46 crc kubenswrapper[4806]: I0126 09:10:46.179074 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59bab334-2b4b-4b73-80ac-6192ad036095-catalog-content\") pod \"community-operators-sbl9t\" (UID: \"59bab334-2b4b-4b73-80ac-6192ad036095\") " pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:46 crc kubenswrapper[4806]: I0126 09:10:46.179230 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59bab334-2b4b-4b73-80ac-6192ad036095-utilities\") pod \"community-operators-sbl9t\" (UID: \"59bab334-2b4b-4b73-80ac-6192ad036095\") " pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:46 crc kubenswrapper[4806]: I0126 09:10:46.196625 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tjpbv\" (UniqueName: \"kubernetes.io/projected/59bab334-2b4b-4b73-80ac-6192ad036095-kube-api-access-tjpbv\") pod \"community-operators-sbl9t\" (UID: \"59bab334-2b4b-4b73-80ac-6192ad036095\") " pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:46 crc kubenswrapper[4806]: I0126 09:10:46.299132 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:46 crc kubenswrapper[4806]: I0126 09:10:46.817446 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sbl9t"] Jan 26 09:10:47 crc kubenswrapper[4806]: I0126 09:10:47.523762 4806 generic.go:334] "Generic (PLEG): container finished" podID="59bab334-2b4b-4b73-80ac-6192ad036095" containerID="2800a1c2b8a72c4d15cd57c7c9a6d9ab0aafa5c10a05f96e20c68cf74642be86" exitCode=0 Jan 26 09:10:47 crc kubenswrapper[4806]: I0126 09:10:47.524235 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbl9t" event={"ID":"59bab334-2b4b-4b73-80ac-6192ad036095","Type":"ContainerDied","Data":"2800a1c2b8a72c4d15cd57c7c9a6d9ab0aafa5c10a05f96e20c68cf74642be86"} Jan 26 09:10:47 crc kubenswrapper[4806]: I0126 09:10:47.524264 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbl9t" event={"ID":"59bab334-2b4b-4b73-80ac-6192ad036095","Type":"ContainerStarted","Data":"0c3b628b4a86bb5b0aa4e728ad1565117ea921e89c84a18fdcd0ea1ed1231ad6"} Jan 26 09:10:48 crc kubenswrapper[4806]: I0126 09:10:48.544113 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbl9t" event={"ID":"59bab334-2b4b-4b73-80ac-6192ad036095","Type":"ContainerStarted","Data":"f8b6c757b07a025c6dbea631ddb021af714512f6933f773142040e2132847754"} Jan 26 09:10:49 crc kubenswrapper[4806]: I0126 09:10:49.553232 4806 generic.go:334] "Generic (PLEG): container finished" podID="59bab334-2b4b-4b73-80ac-6192ad036095" containerID="f8b6c757b07a025c6dbea631ddb021af714512f6933f773142040e2132847754" exitCode=0 Jan 26 09:10:49 crc kubenswrapper[4806]: I0126 09:10:49.553294 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbl9t" event={"ID":"59bab334-2b4b-4b73-80ac-6192ad036095","Type":"ContainerDied","Data":"f8b6c757b07a025c6dbea631ddb021af714512f6933f773142040e2132847754"} Jan 26 09:10:49 crc kubenswrapper[4806]: I0126 09:10:49.555290 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 09:10:50 crc kubenswrapper[4806]: I0126 09:10:50.043466 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:10:50 crc kubenswrapper[4806]: E0126 09:10:50.044544 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:10:50 crc kubenswrapper[4806]: I0126 09:10:50.580491 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbl9t" 
event={"ID":"59bab334-2b4b-4b73-80ac-6192ad036095","Type":"ContainerStarted","Data":"a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97"} Jan 26 09:10:50 crc kubenswrapper[4806]: I0126 09:10:50.604090 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sbl9t" podStartSLOduration=3.202547406 podStartE2EDuration="5.604070394s" podCreationTimestamp="2026-01-26 09:10:45 +0000 UTC" firstStartedPulling="2026-01-26 09:10:47.528064728 +0000 UTC m=+4626.792472784" lastFinishedPulling="2026-01-26 09:10:49.929587716 +0000 UTC m=+4629.193995772" observedRunningTime="2026-01-26 09:10:50.594730277 +0000 UTC m=+4629.859138343" watchObservedRunningTime="2026-01-26 09:10:50.604070394 +0000 UTC m=+4629.868478450" Jan 26 09:10:56 crc kubenswrapper[4806]: I0126 09:10:56.299568 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:56 crc kubenswrapper[4806]: I0126 09:10:56.300698 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:56 crc kubenswrapper[4806]: I0126 09:10:56.356365 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:56 crc kubenswrapper[4806]: I0126 09:10:56.677385 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:56 crc kubenswrapper[4806]: I0126 09:10:56.756775 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sbl9t"] Jan 26 09:10:58 crc kubenswrapper[4806]: I0126 09:10:58.647158 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sbl9t" podUID="59bab334-2b4b-4b73-80ac-6192ad036095" containerName="registry-server" containerID="cri-o://a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97" gracePeriod=2 Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.202843 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.260869 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59bab334-2b4b-4b73-80ac-6192ad036095-catalog-content\") pod \"59bab334-2b4b-4b73-80ac-6192ad036095\" (UID: \"59bab334-2b4b-4b73-80ac-6192ad036095\") " Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.260936 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59bab334-2b4b-4b73-80ac-6192ad036095-utilities\") pod \"59bab334-2b4b-4b73-80ac-6192ad036095\" (UID: \"59bab334-2b4b-4b73-80ac-6192ad036095\") " Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.261068 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjpbv\" (UniqueName: \"kubernetes.io/projected/59bab334-2b4b-4b73-80ac-6192ad036095-kube-api-access-tjpbv\") pod \"59bab334-2b4b-4b73-80ac-6192ad036095\" (UID: \"59bab334-2b4b-4b73-80ac-6192ad036095\") " Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.262769 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59bab334-2b4b-4b73-80ac-6192ad036095-utilities" (OuterVolumeSpecName: "utilities") pod "59bab334-2b4b-4b73-80ac-6192ad036095" (UID: "59bab334-2b4b-4b73-80ac-6192ad036095"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.281019 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59bab334-2b4b-4b73-80ac-6192ad036095-kube-api-access-tjpbv" (OuterVolumeSpecName: "kube-api-access-tjpbv") pod "59bab334-2b4b-4b73-80ac-6192ad036095" (UID: "59bab334-2b4b-4b73-80ac-6192ad036095"). InnerVolumeSpecName "kube-api-access-tjpbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.332323 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59bab334-2b4b-4b73-80ac-6192ad036095-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59bab334-2b4b-4b73-80ac-6192ad036095" (UID: "59bab334-2b4b-4b73-80ac-6192ad036095"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.363732 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59bab334-2b4b-4b73-80ac-6192ad036095-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.363760 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59bab334-2b4b-4b73-80ac-6192ad036095-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.363771 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjpbv\" (UniqueName: \"kubernetes.io/projected/59bab334-2b4b-4b73-80ac-6192ad036095-kube-api-access-tjpbv\") on node \"crc\" DevicePath \"\"" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.659188 4806 generic.go:334] "Generic (PLEG): container finished" podID="59bab334-2b4b-4b73-80ac-6192ad036095" containerID="a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97" exitCode=0 Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.659234 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbl9t" event={"ID":"59bab334-2b4b-4b73-80ac-6192ad036095","Type":"ContainerDied","Data":"a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97"} Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.659266 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbl9t" event={"ID":"59bab334-2b4b-4b73-80ac-6192ad036095","Type":"ContainerDied","Data":"0c3b628b4a86bb5b0aa4e728ad1565117ea921e89c84a18fdcd0ea1ed1231ad6"} Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.659285 4806 scope.go:117] "RemoveContainer" containerID="a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.659334 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sbl9t" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.700844 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sbl9t"] Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.702408 4806 scope.go:117] "RemoveContainer" containerID="f8b6c757b07a025c6dbea631ddb021af714512f6933f773142040e2132847754" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.709926 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sbl9t"] Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.755785 4806 scope.go:117] "RemoveContainer" containerID="2800a1c2b8a72c4d15cd57c7c9a6d9ab0aafa5c10a05f96e20c68cf74642be86" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.780551 4806 scope.go:117] "RemoveContainer" containerID="a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97" Jan 26 09:10:59 crc kubenswrapper[4806]: E0126 09:10:59.781333 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97\": container with ID starting with a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97 not found: ID does not exist" containerID="a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.781372 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97"} err="failed to get container status \"a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97\": rpc error: code = NotFound desc = could not find container \"a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97\": container with ID starting with a6ab6c3c0219ff3f9bfbe1196e80d725869ec7c985f3ab08252ccf442fc20c97 not found: ID does not exist" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.781419 4806 scope.go:117] "RemoveContainer" containerID="f8b6c757b07a025c6dbea631ddb021af714512f6933f773142040e2132847754" Jan 26 09:10:59 crc kubenswrapper[4806]: E0126 09:10:59.781744 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8b6c757b07a025c6dbea631ddb021af714512f6933f773142040e2132847754\": container with ID starting with f8b6c757b07a025c6dbea631ddb021af714512f6933f773142040e2132847754 not found: ID does not exist" containerID="f8b6c757b07a025c6dbea631ddb021af714512f6933f773142040e2132847754" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.781792 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8b6c757b07a025c6dbea631ddb021af714512f6933f773142040e2132847754"} err="failed to get container status \"f8b6c757b07a025c6dbea631ddb021af714512f6933f773142040e2132847754\": rpc error: code = NotFound desc = could not find container \"f8b6c757b07a025c6dbea631ddb021af714512f6933f773142040e2132847754\": container with ID starting with f8b6c757b07a025c6dbea631ddb021af714512f6933f773142040e2132847754 not found: ID does not exist" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.781812 4806 scope.go:117] "RemoveContainer" containerID="2800a1c2b8a72c4d15cd57c7c9a6d9ab0aafa5c10a05f96e20c68cf74642be86" Jan 26 09:10:59 crc kubenswrapper[4806]: E0126 09:10:59.782554 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2800a1c2b8a72c4d15cd57c7c9a6d9ab0aafa5c10a05f96e20c68cf74642be86\": container with ID starting with 2800a1c2b8a72c4d15cd57c7c9a6d9ab0aafa5c10a05f96e20c68cf74642be86 not found: ID does not exist" containerID="2800a1c2b8a72c4d15cd57c7c9a6d9ab0aafa5c10a05f96e20c68cf74642be86" Jan 26 09:10:59 crc kubenswrapper[4806]: I0126 09:10:59.782583 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2800a1c2b8a72c4d15cd57c7c9a6d9ab0aafa5c10a05f96e20c68cf74642be86"} err="failed to get container status \"2800a1c2b8a72c4d15cd57c7c9a6d9ab0aafa5c10a05f96e20c68cf74642be86\": rpc error: code = NotFound desc = could not find container \"2800a1c2b8a72c4d15cd57c7c9a6d9ab0aafa5c10a05f96e20c68cf74642be86\": container with ID starting with 2800a1c2b8a72c4d15cd57c7c9a6d9ab0aafa5c10a05f96e20c68cf74642be86 not found: ID does not exist" Jan 26 09:11:01 crc kubenswrapper[4806]: I0126 09:11:01.051561 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59bab334-2b4b-4b73-80ac-6192ad036095" path="/var/lib/kubelet/pods/59bab334-2b4b-4b73-80ac-6192ad036095/volumes" Jan 26 09:11:04 crc kubenswrapper[4806]: I0126 09:11:04.041996 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:11:04 crc kubenswrapper[4806]: E0126 09:11:04.042725 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:11:18 crc kubenswrapper[4806]: I0126 09:11:18.041993 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:11:18 crc kubenswrapper[4806]: E0126 09:11:18.042886 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:11:33 crc kubenswrapper[4806]: I0126 09:11:33.041593 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:11:33 crc kubenswrapper[4806]: E0126 09:11:33.042251 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:11:47 crc kubenswrapper[4806]: I0126 09:11:47.042355 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:11:47 crc kubenswrapper[4806]: E0126 09:11:47.043075 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:12:02 crc kubenswrapper[4806]: I0126 09:12:02.042123 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:12:02 crc kubenswrapper[4806]: E0126 09:12:02.042816 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:12:17 crc kubenswrapper[4806]: I0126 09:12:17.042248 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:12:17 crc kubenswrapper[4806]: E0126 09:12:17.043069 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:12:28 crc kubenswrapper[4806]: I0126 09:12:28.042378 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:12:28 crc kubenswrapper[4806]: E0126 09:12:28.044428 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:12:43 crc kubenswrapper[4806]: I0126 09:12:43.042448 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:12:43 crc kubenswrapper[4806]: E0126 09:12:43.043182 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:12:56 crc kubenswrapper[4806]: I0126 09:12:56.045052 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:12:56 crc kubenswrapper[4806]: E0126 09:12:56.045980 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:13:11 crc kubenswrapper[4806]: I0126 09:13:11.048449 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:13:11 crc kubenswrapper[4806]: E0126 09:13:11.049344 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:13:26 crc kubenswrapper[4806]: I0126 09:13:26.041919 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:13:26 crc kubenswrapper[4806]: E0126 09:13:26.042791 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:13:39 crc kubenswrapper[4806]: I0126 09:13:39.041990 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:13:39 crc kubenswrapper[4806]: E0126 09:13:39.042814 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:13:51 crc kubenswrapper[4806]: I0126 09:13:51.048416 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:13:51 crc kubenswrapper[4806]: E0126 09:13:51.049213 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.568594 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mwgrd"] Jan 26 09:13:55 crc kubenswrapper[4806]: E0126 09:13:55.570905 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bab334-2b4b-4b73-80ac-6192ad036095" containerName="extract-content" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.570999 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bab334-2b4b-4b73-80ac-6192ad036095" containerName="extract-content" Jan 26 09:13:55 crc kubenswrapper[4806]: E0126 09:13:55.571075 4806 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bab334-2b4b-4b73-80ac-6192ad036095" containerName="registry-server" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.571128 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bab334-2b4b-4b73-80ac-6192ad036095" containerName="registry-server" Jan 26 09:13:55 crc kubenswrapper[4806]: E0126 09:13:55.571260 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59bab334-2b4b-4b73-80ac-6192ad036095" containerName="extract-utilities" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.571316 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="59bab334-2b4b-4b73-80ac-6192ad036095" containerName="extract-utilities" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.571567 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="59bab334-2b4b-4b73-80ac-6192ad036095" containerName="registry-server" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.572963 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.591203 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f331153-d66c-484b-b672-f71b26b7b474-utilities\") pod \"redhat-marketplace-mwgrd\" (UID: \"3f331153-d66c-484b-b672-f71b26b7b474\") " pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.591459 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f331153-d66c-484b-b672-f71b26b7b474-catalog-content\") pod \"redhat-marketplace-mwgrd\" (UID: \"3f331153-d66c-484b-b672-f71b26b7b474\") " pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.591638 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49jmt\" (UniqueName: \"kubernetes.io/projected/3f331153-d66c-484b-b672-f71b26b7b474-kube-api-access-49jmt\") pod \"redhat-marketplace-mwgrd\" (UID: \"3f331153-d66c-484b-b672-f71b26b7b474\") " pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.615012 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwgrd"] Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.694071 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f331153-d66c-484b-b672-f71b26b7b474-utilities\") pod \"redhat-marketplace-mwgrd\" (UID: \"3f331153-d66c-484b-b672-f71b26b7b474\") " pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.694356 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f331153-d66c-484b-b672-f71b26b7b474-catalog-content\") pod \"redhat-marketplace-mwgrd\" (UID: \"3f331153-d66c-484b-b672-f71b26b7b474\") " pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.694424 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49jmt\" (UniqueName: 
\"kubernetes.io/projected/3f331153-d66c-484b-b672-f71b26b7b474-kube-api-access-49jmt\") pod \"redhat-marketplace-mwgrd\" (UID: \"3f331153-d66c-484b-b672-f71b26b7b474\") " pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.694754 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f331153-d66c-484b-b672-f71b26b7b474-utilities\") pod \"redhat-marketplace-mwgrd\" (UID: \"3f331153-d66c-484b-b672-f71b26b7b474\") " pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.694786 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f331153-d66c-484b-b672-f71b26b7b474-catalog-content\") pod \"redhat-marketplace-mwgrd\" (UID: \"3f331153-d66c-484b-b672-f71b26b7b474\") " pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.741467 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49jmt\" (UniqueName: \"kubernetes.io/projected/3f331153-d66c-484b-b672-f71b26b7b474-kube-api-access-49jmt\") pod \"redhat-marketplace-mwgrd\" (UID: \"3f331153-d66c-484b-b672-f71b26b7b474\") " pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:13:55 crc kubenswrapper[4806]: I0126 09:13:55.895320 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:13:56 crc kubenswrapper[4806]: I0126 09:13:56.561342 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwgrd"] Jan 26 09:13:57 crc kubenswrapper[4806]: I0126 09:13:57.576977 4806 generic.go:334] "Generic (PLEG): container finished" podID="3f331153-d66c-484b-b672-f71b26b7b474" containerID="b0221a5bddc996c3a3949ef706d75a8f693d9163a0d23d764ba2d2044db651fc" exitCode=0 Jan 26 09:13:57 crc kubenswrapper[4806]: I0126 09:13:57.577037 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwgrd" event={"ID":"3f331153-d66c-484b-b672-f71b26b7b474","Type":"ContainerDied","Data":"b0221a5bddc996c3a3949ef706d75a8f693d9163a0d23d764ba2d2044db651fc"} Jan 26 09:13:57 crc kubenswrapper[4806]: I0126 09:13:57.577273 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwgrd" event={"ID":"3f331153-d66c-484b-b672-f71b26b7b474","Type":"ContainerStarted","Data":"2549c70c0e271fed8769cfee4d797b5cc9799c4f974f380fcd107929ebab810f"} Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.160442 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dhm8r"] Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.164032 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.192698 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dhm8r"] Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.343414 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9609222-84cc-4ac2-bae6-381af776ace2-catalog-content\") pod \"redhat-operators-dhm8r\" (UID: \"d9609222-84cc-4ac2-bae6-381af776ace2\") " pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.343478 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrh98\" (UniqueName: \"kubernetes.io/projected/d9609222-84cc-4ac2-bae6-381af776ace2-kube-api-access-lrh98\") pod \"redhat-operators-dhm8r\" (UID: \"d9609222-84cc-4ac2-bae6-381af776ace2\") " pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.343626 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9609222-84cc-4ac2-bae6-381af776ace2-utilities\") pod \"redhat-operators-dhm8r\" (UID: \"d9609222-84cc-4ac2-bae6-381af776ace2\") " pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.445684 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9609222-84cc-4ac2-bae6-381af776ace2-utilities\") pod \"redhat-operators-dhm8r\" (UID: \"d9609222-84cc-4ac2-bae6-381af776ace2\") " pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.445819 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9609222-84cc-4ac2-bae6-381af776ace2-catalog-content\") pod \"redhat-operators-dhm8r\" (UID: \"d9609222-84cc-4ac2-bae6-381af776ace2\") " pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.445868 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrh98\" (UniqueName: \"kubernetes.io/projected/d9609222-84cc-4ac2-bae6-381af776ace2-kube-api-access-lrh98\") pod \"redhat-operators-dhm8r\" (UID: \"d9609222-84cc-4ac2-bae6-381af776ace2\") " pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.446313 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9609222-84cc-4ac2-bae6-381af776ace2-utilities\") pod \"redhat-operators-dhm8r\" (UID: \"d9609222-84cc-4ac2-bae6-381af776ace2\") " pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.446640 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9609222-84cc-4ac2-bae6-381af776ace2-catalog-content\") pod \"redhat-operators-dhm8r\" (UID: \"d9609222-84cc-4ac2-bae6-381af776ace2\") " pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.465977 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lrh98\" (UniqueName: \"kubernetes.io/projected/d9609222-84cc-4ac2-bae6-381af776ace2-kube-api-access-lrh98\") pod \"redhat-operators-dhm8r\" (UID: \"d9609222-84cc-4ac2-bae6-381af776ace2\") " pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.486795 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:13:58 crc kubenswrapper[4806]: I0126 09:13:58.595127 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwgrd" event={"ID":"3f331153-d66c-484b-b672-f71b26b7b474","Type":"ContainerStarted","Data":"f481dabc224afabef93f1de2a7c2331352832a819a7e486adcb1106749777461"} Jan 26 09:13:59 crc kubenswrapper[4806]: I0126 09:13:59.420422 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dhm8r"] Jan 26 09:13:59 crc kubenswrapper[4806]: I0126 09:13:59.604331 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhm8r" event={"ID":"d9609222-84cc-4ac2-bae6-381af776ace2","Type":"ContainerStarted","Data":"2cf53552e002860bac885e6f88a1efc1ecbc419b446b8b5cfce7b1be1b21b459"} Jan 26 09:14:00 crc kubenswrapper[4806]: I0126 09:14:00.614903 4806 generic.go:334] "Generic (PLEG): container finished" podID="d9609222-84cc-4ac2-bae6-381af776ace2" containerID="4d972532392d435b79bd9dcde7af9b6399fab31aaa02705816ce647ef69ab8cb" exitCode=0 Jan 26 09:14:00 crc kubenswrapper[4806]: I0126 09:14:00.615028 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhm8r" event={"ID":"d9609222-84cc-4ac2-bae6-381af776ace2","Type":"ContainerDied","Data":"4d972532392d435b79bd9dcde7af9b6399fab31aaa02705816ce647ef69ab8cb"} Jan 26 09:14:00 crc kubenswrapper[4806]: I0126 09:14:00.617560 4806 generic.go:334] "Generic (PLEG): container finished" podID="3f331153-d66c-484b-b672-f71b26b7b474" containerID="f481dabc224afabef93f1de2a7c2331352832a819a7e486adcb1106749777461" exitCode=0 Jan 26 09:14:00 crc kubenswrapper[4806]: I0126 09:14:00.617591 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwgrd" event={"ID":"3f331153-d66c-484b-b672-f71b26b7b474","Type":"ContainerDied","Data":"f481dabc224afabef93f1de2a7c2331352832a819a7e486adcb1106749777461"} Jan 26 09:14:01 crc kubenswrapper[4806]: I0126 09:14:01.630625 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwgrd" event={"ID":"3f331153-d66c-484b-b672-f71b26b7b474","Type":"ContainerStarted","Data":"a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a"} Jan 26 09:14:02 crc kubenswrapper[4806]: I0126 09:14:02.644352 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhm8r" event={"ID":"d9609222-84cc-4ac2-bae6-381af776ace2","Type":"ContainerStarted","Data":"d055a718fe5fc27a143e07bc35f3be5edbad3575d95bdab1e2319b9b34c1dd95"} Jan 26 09:14:02 crc kubenswrapper[4806]: I0126 09:14:02.671425 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mwgrd" podStartSLOduration=3.915103964 podStartE2EDuration="7.671403555s" podCreationTimestamp="2026-01-26 09:13:55 +0000 UTC" firstStartedPulling="2026-01-26 09:13:57.580028494 +0000 UTC m=+4816.844436550" lastFinishedPulling="2026-01-26 09:14:01.336328085 +0000 UTC m=+4820.600736141" 
observedRunningTime="2026-01-26 09:14:02.663306495 +0000 UTC m=+4821.927714551" watchObservedRunningTime="2026-01-26 09:14:02.671403555 +0000 UTC m=+4821.935811611" Jan 26 09:14:04 crc kubenswrapper[4806]: I0126 09:14:04.042705 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:14:04 crc kubenswrapper[4806]: E0126 09:14:04.043295 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:14:05 crc kubenswrapper[4806]: I0126 09:14:05.897509 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:14:05 crc kubenswrapper[4806]: I0126 09:14:05.897846 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:14:07 crc kubenswrapper[4806]: I0126 09:14:07.202897 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mwgrd" podUID="3f331153-d66c-484b-b672-f71b26b7b474" containerName="registry-server" probeResult="failure" output=< Jan 26 09:14:07 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 09:14:07 crc kubenswrapper[4806]: > Jan 26 09:14:07 crc kubenswrapper[4806]: I0126 09:14:07.687384 4806 generic.go:334] "Generic (PLEG): container finished" podID="d9609222-84cc-4ac2-bae6-381af776ace2" containerID="d055a718fe5fc27a143e07bc35f3be5edbad3575d95bdab1e2319b9b34c1dd95" exitCode=0 Jan 26 09:14:07 crc kubenswrapper[4806]: I0126 09:14:07.687426 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhm8r" event={"ID":"d9609222-84cc-4ac2-bae6-381af776ace2","Type":"ContainerDied","Data":"d055a718fe5fc27a143e07bc35f3be5edbad3575d95bdab1e2319b9b34c1dd95"} Jan 26 09:14:08 crc kubenswrapper[4806]: I0126 09:14:08.699127 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhm8r" event={"ID":"d9609222-84cc-4ac2-bae6-381af776ace2","Type":"ContainerStarted","Data":"e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c"} Jan 26 09:14:08 crc kubenswrapper[4806]: I0126 09:14:08.725400 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dhm8r" podStartSLOduration=2.980314321 podStartE2EDuration="10.725357465s" podCreationTimestamp="2026-01-26 09:13:58 +0000 UTC" firstStartedPulling="2026-01-26 09:14:00.616856418 +0000 UTC m=+4819.881264464" lastFinishedPulling="2026-01-26 09:14:08.361899542 +0000 UTC m=+4827.626307608" observedRunningTime="2026-01-26 09:14:08.724125591 +0000 UTC m=+4827.988533647" watchObservedRunningTime="2026-01-26 09:14:08.725357465 +0000 UTC m=+4827.989765521" Jan 26 09:14:15 crc kubenswrapper[4806]: I0126 09:14:15.042123 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:14:15 crc kubenswrapper[4806]: E0126 09:14:15.043029 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:14:15 crc kubenswrapper[4806]: I0126 09:14:15.954679 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:14:16 crc kubenswrapper[4806]: I0126 09:14:16.003035 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:14:16 crc kubenswrapper[4806]: I0126 09:14:16.190674 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwgrd"] Jan 26 09:14:17 crc kubenswrapper[4806]: I0126 09:14:17.776594 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mwgrd" podUID="3f331153-d66c-484b-b672-f71b26b7b474" containerName="registry-server" containerID="cri-o://a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a" gracePeriod=2 Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.397015 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.488006 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.488066 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.594015 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49jmt\" (UniqueName: \"kubernetes.io/projected/3f331153-d66c-484b-b672-f71b26b7b474-kube-api-access-49jmt\") pod \"3f331153-d66c-484b-b672-f71b26b7b474\" (UID: \"3f331153-d66c-484b-b672-f71b26b7b474\") " Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.594217 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f331153-d66c-484b-b672-f71b26b7b474-utilities\") pod \"3f331153-d66c-484b-b672-f71b26b7b474\" (UID: \"3f331153-d66c-484b-b672-f71b26b7b474\") " Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.594302 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f331153-d66c-484b-b672-f71b26b7b474-catalog-content\") pod \"3f331153-d66c-484b-b672-f71b26b7b474\" (UID: \"3f331153-d66c-484b-b672-f71b26b7b474\") " Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.594599 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f331153-d66c-484b-b672-f71b26b7b474-utilities" (OuterVolumeSpecName: "utilities") pod "3f331153-d66c-484b-b672-f71b26b7b474" (UID: "3f331153-d66c-484b-b672-f71b26b7b474"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.594799 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f331153-d66c-484b-b672-f71b26b7b474-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.613274 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f331153-d66c-484b-b672-f71b26b7b474-kube-api-access-49jmt" (OuterVolumeSpecName: "kube-api-access-49jmt") pod "3f331153-d66c-484b-b672-f71b26b7b474" (UID: "3f331153-d66c-484b-b672-f71b26b7b474"). InnerVolumeSpecName "kube-api-access-49jmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.615355 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f331153-d66c-484b-b672-f71b26b7b474-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f331153-d66c-484b-b672-f71b26b7b474" (UID: "3f331153-d66c-484b-b672-f71b26b7b474"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.696161 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f331153-d66c-484b-b672-f71b26b7b474-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.696502 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49jmt\" (UniqueName: \"kubernetes.io/projected/3f331153-d66c-484b-b672-f71b26b7b474-kube-api-access-49jmt\") on node \"crc\" DevicePath \"\"" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.786013 4806 generic.go:334] "Generic (PLEG): container finished" podID="3f331153-d66c-484b-b672-f71b26b7b474" containerID="a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a" exitCode=0 Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.786058 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwgrd" event={"ID":"3f331153-d66c-484b-b672-f71b26b7b474","Type":"ContainerDied","Data":"a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a"} Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.786084 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mwgrd" event={"ID":"3f331153-d66c-484b-b672-f71b26b7b474","Type":"ContainerDied","Data":"2549c70c0e271fed8769cfee4d797b5cc9799c4f974f380fcd107929ebab810f"} Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.786091 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mwgrd" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.786103 4806 scope.go:117] "RemoveContainer" containerID="a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.811651 4806 scope.go:117] "RemoveContainer" containerID="f481dabc224afabef93f1de2a7c2331352832a819a7e486adcb1106749777461" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.829591 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwgrd"] Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.838130 4806 scope.go:117] "RemoveContainer" containerID="b0221a5bddc996c3a3949ef706d75a8f693d9163a0d23d764ba2d2044db651fc" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.848878 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mwgrd"] Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.874296 4806 scope.go:117] "RemoveContainer" containerID="a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a" Jan 26 09:14:18 crc kubenswrapper[4806]: E0126 09:14:18.874845 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a\": container with ID starting with a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a not found: ID does not exist" containerID="a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.874893 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a"} err="failed to get container status \"a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a\": rpc error: code = NotFound desc = could not find container \"a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a\": container with ID starting with a3c42c9e2fe42a33d175df68baa7e93cbb055cf3078f59bb5e06eacb45229e4a not found: ID does not exist" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.874913 4806 scope.go:117] "RemoveContainer" containerID="f481dabc224afabef93f1de2a7c2331352832a819a7e486adcb1106749777461" Jan 26 09:14:18 crc kubenswrapper[4806]: E0126 09:14:18.875254 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f481dabc224afabef93f1de2a7c2331352832a819a7e486adcb1106749777461\": container with ID starting with f481dabc224afabef93f1de2a7c2331352832a819a7e486adcb1106749777461 not found: ID does not exist" containerID="f481dabc224afabef93f1de2a7c2331352832a819a7e486adcb1106749777461" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.875312 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f481dabc224afabef93f1de2a7c2331352832a819a7e486adcb1106749777461"} err="failed to get container status \"f481dabc224afabef93f1de2a7c2331352832a819a7e486adcb1106749777461\": rpc error: code = NotFound desc = could not find container \"f481dabc224afabef93f1de2a7c2331352832a819a7e486adcb1106749777461\": container with ID starting with f481dabc224afabef93f1de2a7c2331352832a819a7e486adcb1106749777461 not found: ID does not exist" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.875367 4806 scope.go:117] "RemoveContainer" 
containerID="b0221a5bddc996c3a3949ef706d75a8f693d9163a0d23d764ba2d2044db651fc" Jan 26 09:14:18 crc kubenswrapper[4806]: E0126 09:14:18.875702 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0221a5bddc996c3a3949ef706d75a8f693d9163a0d23d764ba2d2044db651fc\": container with ID starting with b0221a5bddc996c3a3949ef706d75a8f693d9163a0d23d764ba2d2044db651fc not found: ID does not exist" containerID="b0221a5bddc996c3a3949ef706d75a8f693d9163a0d23d764ba2d2044db651fc" Jan 26 09:14:18 crc kubenswrapper[4806]: I0126 09:14:18.875732 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0221a5bddc996c3a3949ef706d75a8f693d9163a0d23d764ba2d2044db651fc"} err="failed to get container status \"b0221a5bddc996c3a3949ef706d75a8f693d9163a0d23d764ba2d2044db651fc\": rpc error: code = NotFound desc = could not find container \"b0221a5bddc996c3a3949ef706d75a8f693d9163a0d23d764ba2d2044db651fc\": container with ID starting with b0221a5bddc996c3a3949ef706d75a8f693d9163a0d23d764ba2d2044db651fc not found: ID does not exist" Jan 26 09:14:19 crc kubenswrapper[4806]: I0126 09:14:19.052224 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f331153-d66c-484b-b672-f71b26b7b474" path="/var/lib/kubelet/pods/3f331153-d66c-484b-b672-f71b26b7b474/volumes" Jan 26 09:14:19 crc kubenswrapper[4806]: I0126 09:14:19.532655 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dhm8r" podUID="d9609222-84cc-4ac2-bae6-381af776ace2" containerName="registry-server" probeResult="failure" output=< Jan 26 09:14:19 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 09:14:19 crc kubenswrapper[4806]: > Jan 26 09:14:28 crc kubenswrapper[4806]: I0126 09:14:28.540471 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:14:28 crc kubenswrapper[4806]: I0126 09:14:28.602334 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:14:30 crc kubenswrapper[4806]: I0126 09:14:30.042214 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:14:30 crc kubenswrapper[4806]: E0126 09:14:30.042886 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:14:30 crc kubenswrapper[4806]: I0126 09:14:30.558075 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dhm8r"] Jan 26 09:14:30 crc kubenswrapper[4806]: I0126 09:14:30.558677 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dhm8r" podUID="d9609222-84cc-4ac2-bae6-381af776ace2" containerName="registry-server" containerID="cri-o://e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c" gracePeriod=2 Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.735762 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.802255 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9609222-84cc-4ac2-bae6-381af776ace2-utilities\") pod \"d9609222-84cc-4ac2-bae6-381af776ace2\" (UID: \"d9609222-84cc-4ac2-bae6-381af776ace2\") " Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.802497 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrh98\" (UniqueName: \"kubernetes.io/projected/d9609222-84cc-4ac2-bae6-381af776ace2-kube-api-access-lrh98\") pod \"d9609222-84cc-4ac2-bae6-381af776ace2\" (UID: \"d9609222-84cc-4ac2-bae6-381af776ace2\") " Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.802615 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9609222-84cc-4ac2-bae6-381af776ace2-catalog-content\") pod \"d9609222-84cc-4ac2-bae6-381af776ace2\" (UID: \"d9609222-84cc-4ac2-bae6-381af776ace2\") " Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.808167 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9609222-84cc-4ac2-bae6-381af776ace2-utilities" (OuterVolumeSpecName: "utilities") pod "d9609222-84cc-4ac2-bae6-381af776ace2" (UID: "d9609222-84cc-4ac2-bae6-381af776ace2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.813421 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9609222-84cc-4ac2-bae6-381af776ace2-kube-api-access-lrh98" (OuterVolumeSpecName: "kube-api-access-lrh98") pod "d9609222-84cc-4ac2-bae6-381af776ace2" (UID: "d9609222-84cc-4ac2-bae6-381af776ace2"). InnerVolumeSpecName "kube-api-access-lrh98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.894383 4806 generic.go:334] "Generic (PLEG): container finished" podID="d9609222-84cc-4ac2-bae6-381af776ace2" containerID="e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c" exitCode=0 Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.894424 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhm8r" event={"ID":"d9609222-84cc-4ac2-bae6-381af776ace2","Type":"ContainerDied","Data":"e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c"} Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.894450 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhm8r" event={"ID":"d9609222-84cc-4ac2-bae6-381af776ace2","Type":"ContainerDied","Data":"2cf53552e002860bac885e6f88a1efc1ecbc419b446b8b5cfce7b1be1b21b459"} Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.894448 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dhm8r" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.894665 4806 scope.go:117] "RemoveContainer" containerID="e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.904911 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrh98\" (UniqueName: \"kubernetes.io/projected/d9609222-84cc-4ac2-bae6-381af776ace2-kube-api-access-lrh98\") on node \"crc\" DevicePath \"\"" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.904946 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9609222-84cc-4ac2-bae6-381af776ace2-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.916783 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9609222-84cc-4ac2-bae6-381af776ace2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9609222-84cc-4ac2-bae6-381af776ace2" (UID: "d9609222-84cc-4ac2-bae6-381af776ace2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.918768 4806 scope.go:117] "RemoveContainer" containerID="d055a718fe5fc27a143e07bc35f3be5edbad3575d95bdab1e2319b9b34c1dd95" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.940989 4806 scope.go:117] "RemoveContainer" containerID="4d972532392d435b79bd9dcde7af9b6399fab31aaa02705816ce647ef69ab8cb" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.992392 4806 scope.go:117] "RemoveContainer" containerID="e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c" Jan 26 09:14:31 crc kubenswrapper[4806]: E0126 09:14:31.992813 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c\": container with ID starting with e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c not found: ID does not exist" containerID="e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.992855 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c"} err="failed to get container status \"e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c\": rpc error: code = NotFound desc = could not find container \"e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c\": container with ID starting with e4bd5f9a78e230166f822db1b6f0a6f30f1932cbd4c41bb279ba62c8a4e6453c not found: ID does not exist" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.992873 4806 scope.go:117] "RemoveContainer" containerID="d055a718fe5fc27a143e07bc35f3be5edbad3575d95bdab1e2319b9b34c1dd95" Jan 26 09:14:31 crc kubenswrapper[4806]: E0126 09:14:31.993114 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d055a718fe5fc27a143e07bc35f3be5edbad3575d95bdab1e2319b9b34c1dd95\": container with ID starting with d055a718fe5fc27a143e07bc35f3be5edbad3575d95bdab1e2319b9b34c1dd95 not found: ID does not exist" containerID="d055a718fe5fc27a143e07bc35f3be5edbad3575d95bdab1e2319b9b34c1dd95" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.993137 4806 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d055a718fe5fc27a143e07bc35f3be5edbad3575d95bdab1e2319b9b34c1dd95"} err="failed to get container status \"d055a718fe5fc27a143e07bc35f3be5edbad3575d95bdab1e2319b9b34c1dd95\": rpc error: code = NotFound desc = could not find container \"d055a718fe5fc27a143e07bc35f3be5edbad3575d95bdab1e2319b9b34c1dd95\": container with ID starting with d055a718fe5fc27a143e07bc35f3be5edbad3575d95bdab1e2319b9b34c1dd95 not found: ID does not exist" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.993190 4806 scope.go:117] "RemoveContainer" containerID="4d972532392d435b79bd9dcde7af9b6399fab31aaa02705816ce647ef69ab8cb" Jan 26 09:14:31 crc kubenswrapper[4806]: E0126 09:14:31.993436 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d972532392d435b79bd9dcde7af9b6399fab31aaa02705816ce647ef69ab8cb\": container with ID starting with 4d972532392d435b79bd9dcde7af9b6399fab31aaa02705816ce647ef69ab8cb not found: ID does not exist" containerID="4d972532392d435b79bd9dcde7af9b6399fab31aaa02705816ce647ef69ab8cb" Jan 26 09:14:31 crc kubenswrapper[4806]: I0126 09:14:31.993489 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d972532392d435b79bd9dcde7af9b6399fab31aaa02705816ce647ef69ab8cb"} err="failed to get container status \"4d972532392d435b79bd9dcde7af9b6399fab31aaa02705816ce647ef69ab8cb\": rpc error: code = NotFound desc = could not find container \"4d972532392d435b79bd9dcde7af9b6399fab31aaa02705816ce647ef69ab8cb\": container with ID starting with 4d972532392d435b79bd9dcde7af9b6399fab31aaa02705816ce647ef69ab8cb not found: ID does not exist" Jan 26 09:14:32 crc kubenswrapper[4806]: I0126 09:14:32.006810 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9609222-84cc-4ac2-bae6-381af776ace2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:14:32 crc kubenswrapper[4806]: I0126 09:14:32.239019 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dhm8r"] Jan 26 09:14:32 crc kubenswrapper[4806]: I0126 09:14:32.250179 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dhm8r"] Jan 26 09:14:33 crc kubenswrapper[4806]: I0126 09:14:33.052631 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9609222-84cc-4ac2-bae6-381af776ace2" path="/var/lib/kubelet/pods/d9609222-84cc-4ac2-bae6-381af776ace2/volumes" Jan 26 09:14:41 crc kubenswrapper[4806]: I0126 09:14:41.048278 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:14:41 crc kubenswrapper[4806]: E0126 09:14:41.049437 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:14:54 crc kubenswrapper[4806]: I0126 09:14:54.041826 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:14:55 crc kubenswrapper[4806]: I0126 09:14:55.101066 4806 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"96e9b18336a9454be75b86334a09c23dec3ea9118320b75e864a2907328c228f"} Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.164242 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz"] Jan 26 09:15:00 crc kubenswrapper[4806]: E0126 09:15:00.165158 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f331153-d66c-484b-b672-f71b26b7b474" containerName="extract-utilities" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.165170 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f331153-d66c-484b-b672-f71b26b7b474" containerName="extract-utilities" Jan 26 09:15:00 crc kubenswrapper[4806]: E0126 09:15:00.165181 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9609222-84cc-4ac2-bae6-381af776ace2" containerName="extract-utilities" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.165187 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9609222-84cc-4ac2-bae6-381af776ace2" containerName="extract-utilities" Jan 26 09:15:00 crc kubenswrapper[4806]: E0126 09:15:00.165209 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f331153-d66c-484b-b672-f71b26b7b474" containerName="extract-content" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.165215 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f331153-d66c-484b-b672-f71b26b7b474" containerName="extract-content" Jan 26 09:15:00 crc kubenswrapper[4806]: E0126 09:15:00.165234 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9609222-84cc-4ac2-bae6-381af776ace2" containerName="extract-content" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.165240 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9609222-84cc-4ac2-bae6-381af776ace2" containerName="extract-content" Jan 26 09:15:00 crc kubenswrapper[4806]: E0126 09:15:00.165251 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f331153-d66c-484b-b672-f71b26b7b474" containerName="registry-server" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.165257 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f331153-d66c-484b-b672-f71b26b7b474" containerName="registry-server" Jan 26 09:15:00 crc kubenswrapper[4806]: E0126 09:15:00.165270 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9609222-84cc-4ac2-bae6-381af776ace2" containerName="registry-server" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.165275 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9609222-84cc-4ac2-bae6-381af776ace2" containerName="registry-server" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.165445 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9609222-84cc-4ac2-bae6-381af776ace2" containerName="registry-server" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.165460 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f331153-d66c-484b-b672-f71b26b7b474" containerName="registry-server" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.166180 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.170454 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.172329 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.184231 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz"] Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.220624 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77b0ffed-1723-4897-97d7-1e459f298b6c-config-volume\") pod \"collect-profiles-29490315-zqqkz\" (UID: \"77b0ffed-1723-4897-97d7-1e459f298b6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.220852 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drqfk\" (UniqueName: \"kubernetes.io/projected/77b0ffed-1723-4897-97d7-1e459f298b6c-kube-api-access-drqfk\") pod \"collect-profiles-29490315-zqqkz\" (UID: \"77b0ffed-1723-4897-97d7-1e459f298b6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.220989 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77b0ffed-1723-4897-97d7-1e459f298b6c-secret-volume\") pod \"collect-profiles-29490315-zqqkz\" (UID: \"77b0ffed-1723-4897-97d7-1e459f298b6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.323667 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77b0ffed-1723-4897-97d7-1e459f298b6c-config-volume\") pod \"collect-profiles-29490315-zqqkz\" (UID: \"77b0ffed-1723-4897-97d7-1e459f298b6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.323740 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drqfk\" (UniqueName: \"kubernetes.io/projected/77b0ffed-1723-4897-97d7-1e459f298b6c-kube-api-access-drqfk\") pod \"collect-profiles-29490315-zqqkz\" (UID: \"77b0ffed-1723-4897-97d7-1e459f298b6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.323875 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77b0ffed-1723-4897-97d7-1e459f298b6c-secret-volume\") pod \"collect-profiles-29490315-zqqkz\" (UID: \"77b0ffed-1723-4897-97d7-1e459f298b6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.324563 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77b0ffed-1723-4897-97d7-1e459f298b6c-config-volume\") pod 
\"collect-profiles-29490315-zqqkz\" (UID: \"77b0ffed-1723-4897-97d7-1e459f298b6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.334083 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77b0ffed-1723-4897-97d7-1e459f298b6c-secret-volume\") pod \"collect-profiles-29490315-zqqkz\" (UID: \"77b0ffed-1723-4897-97d7-1e459f298b6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.358337 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drqfk\" (UniqueName: \"kubernetes.io/projected/77b0ffed-1723-4897-97d7-1e459f298b6c-kube-api-access-drqfk\") pod \"collect-profiles-29490315-zqqkz\" (UID: \"77b0ffed-1723-4897-97d7-1e459f298b6c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.491388 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:00 crc kubenswrapper[4806]: I0126 09:15:00.927157 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz"] Jan 26 09:15:01 crc kubenswrapper[4806]: I0126 09:15:01.150655 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" event={"ID":"77b0ffed-1723-4897-97d7-1e459f298b6c","Type":"ContainerStarted","Data":"98d0ff311ad0f4e1275fd6ceb536d2bf0e0f6ff96be58a2e8e72fdcfacf28282"} Jan 26 09:15:01 crc kubenswrapper[4806]: I0126 09:15:01.150697 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" event={"ID":"77b0ffed-1723-4897-97d7-1e459f298b6c","Type":"ContainerStarted","Data":"3ddfb4844b35fb2155964419d4b8c372b937f2ca76056e7c562c9d4feaf368dd"} Jan 26 09:15:02 crc kubenswrapper[4806]: I0126 09:15:02.161444 4806 generic.go:334] "Generic (PLEG): container finished" podID="77b0ffed-1723-4897-97d7-1e459f298b6c" containerID="98d0ff311ad0f4e1275fd6ceb536d2bf0e0f6ff96be58a2e8e72fdcfacf28282" exitCode=0 Jan 26 09:15:02 crc kubenswrapper[4806]: I0126 09:15:02.161499 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" event={"ID":"77b0ffed-1723-4897-97d7-1e459f298b6c","Type":"ContainerDied","Data":"98d0ff311ad0f4e1275fd6ceb536d2bf0e0f6ff96be58a2e8e72fdcfacf28282"} Jan 26 09:15:03 crc kubenswrapper[4806]: I0126 09:15:03.531132 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:03 crc kubenswrapper[4806]: I0126 09:15:03.584854 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drqfk\" (UniqueName: \"kubernetes.io/projected/77b0ffed-1723-4897-97d7-1e459f298b6c-kube-api-access-drqfk\") pod \"77b0ffed-1723-4897-97d7-1e459f298b6c\" (UID: \"77b0ffed-1723-4897-97d7-1e459f298b6c\") " Jan 26 09:15:03 crc kubenswrapper[4806]: I0126 09:15:03.584902 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77b0ffed-1723-4897-97d7-1e459f298b6c-config-volume\") pod \"77b0ffed-1723-4897-97d7-1e459f298b6c\" (UID: \"77b0ffed-1723-4897-97d7-1e459f298b6c\") " Jan 26 09:15:03 crc kubenswrapper[4806]: I0126 09:15:03.584989 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77b0ffed-1723-4897-97d7-1e459f298b6c-secret-volume\") pod \"77b0ffed-1723-4897-97d7-1e459f298b6c\" (UID: \"77b0ffed-1723-4897-97d7-1e459f298b6c\") " Jan 26 09:15:03 crc kubenswrapper[4806]: I0126 09:15:03.585799 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77b0ffed-1723-4897-97d7-1e459f298b6c-config-volume" (OuterVolumeSpecName: "config-volume") pod "77b0ffed-1723-4897-97d7-1e459f298b6c" (UID: "77b0ffed-1723-4897-97d7-1e459f298b6c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:15:03 crc kubenswrapper[4806]: I0126 09:15:03.591153 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b0ffed-1723-4897-97d7-1e459f298b6c-kube-api-access-drqfk" (OuterVolumeSpecName: "kube-api-access-drqfk") pod "77b0ffed-1723-4897-97d7-1e459f298b6c" (UID: "77b0ffed-1723-4897-97d7-1e459f298b6c"). InnerVolumeSpecName "kube-api-access-drqfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:15:03 crc kubenswrapper[4806]: I0126 09:15:03.591245 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77b0ffed-1723-4897-97d7-1e459f298b6c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77b0ffed-1723-4897-97d7-1e459f298b6c" (UID: "77b0ffed-1723-4897-97d7-1e459f298b6c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:15:03 crc kubenswrapper[4806]: I0126 09:15:03.687544 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drqfk\" (UniqueName: \"kubernetes.io/projected/77b0ffed-1723-4897-97d7-1e459f298b6c-kube-api-access-drqfk\") on node \"crc\" DevicePath \"\"" Jan 26 09:15:03 crc kubenswrapper[4806]: I0126 09:15:03.687587 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77b0ffed-1723-4897-97d7-1e459f298b6c-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:15:03 crc kubenswrapper[4806]: I0126 09:15:03.687599 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77b0ffed-1723-4897-97d7-1e459f298b6c-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:15:04 crc kubenswrapper[4806]: I0126 09:15:04.180428 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" event={"ID":"77b0ffed-1723-4897-97d7-1e459f298b6c","Type":"ContainerDied","Data":"3ddfb4844b35fb2155964419d4b8c372b937f2ca76056e7c562c9d4feaf368dd"} Jan 26 09:15:04 crc kubenswrapper[4806]: I0126 09:15:04.180887 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490315-zqqkz" Jan 26 09:15:04 crc kubenswrapper[4806]: I0126 09:15:04.180747 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ddfb4844b35fb2155964419d4b8c372b937f2ca76056e7c562c9d4feaf368dd" Jan 26 09:15:04 crc kubenswrapper[4806]: I0126 09:15:04.622591 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp"] Jan 26 09:15:04 crc kubenswrapper[4806]: I0126 09:15:04.630921 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490270-65lcp"] Jan 26 09:15:05 crc kubenswrapper[4806]: I0126 09:15:05.052148 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7c14310-f673-4a0f-a892-477b7a76b6ab" path="/var/lib/kubelet/pods/c7c14310-f673-4a0f-a892-477b7a76b6ab/volumes" Jan 26 09:15:38 crc kubenswrapper[4806]: I0126 09:15:38.654087 4806 scope.go:117] "RemoveContainer" containerID="4798f7dd37e755aa247ebc55fbfade0c9e5bee012f337adc7719d10f368ae5c0" Jan 26 09:17:15 crc kubenswrapper[4806]: I0126 09:17:15.806379 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:17:15 crc kubenswrapper[4806]: I0126 09:17:15.806938 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:17:45 crc kubenswrapper[4806]: I0126 09:17:45.806100 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 26 09:17:45 crc kubenswrapper[4806]: I0126 09:17:45.806788 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:18:15 crc kubenswrapper[4806]: I0126 09:18:15.806578 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:18:15 crc kubenswrapper[4806]: I0126 09:18:15.807090 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:18:15 crc kubenswrapper[4806]: I0126 09:18:15.807131 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 09:18:15 crc kubenswrapper[4806]: I0126 09:18:15.807873 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96e9b18336a9454be75b86334a09c23dec3ea9118320b75e864a2907328c228f"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:18:15 crc kubenswrapper[4806]: I0126 09:18:15.807920 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://96e9b18336a9454be75b86334a09c23dec3ea9118320b75e864a2907328c228f" gracePeriod=600 Jan 26 09:18:16 crc kubenswrapper[4806]: I0126 09:18:16.846952 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="96e9b18336a9454be75b86334a09c23dec3ea9118320b75e864a2907328c228f" exitCode=0 Jan 26 09:18:16 crc kubenswrapper[4806]: I0126 09:18:16.847575 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"96e9b18336a9454be75b86334a09c23dec3ea9118320b75e864a2907328c228f"} Jan 26 09:18:16 crc kubenswrapper[4806]: I0126 09:18:16.847609 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a"} Jan 26 09:18:16 crc kubenswrapper[4806]: I0126 09:18:16.847628 4806 scope.go:117] "RemoveContainer" containerID="0352a635b134aa1af5c582c6df3c8dae2524a5c47f9d9d0d06d1e31e7300a542" Jan 26 09:20:45 crc kubenswrapper[4806]: I0126 09:20:45.806204 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:20:45 crc kubenswrapper[4806]: I0126 09:20:45.806752 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:21:15 crc kubenswrapper[4806]: I0126 09:21:15.806749 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:21:15 crc kubenswrapper[4806]: I0126 09:21:15.807160 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:21:45 crc kubenswrapper[4806]: I0126 09:21:45.807073 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:21:45 crc kubenswrapper[4806]: I0126 09:21:45.807669 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:21:45 crc kubenswrapper[4806]: I0126 09:21:45.807723 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 09:21:45 crc kubenswrapper[4806]: I0126 09:21:45.808540 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:21:45 crc kubenswrapper[4806]: I0126 09:21:45.808605 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" gracePeriod=600 Jan 26 09:21:45 crc kubenswrapper[4806]: E0126 09:21:45.928689 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:21:46 crc 
kubenswrapper[4806]: I0126 09:21:46.644903 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" exitCode=0 Jan 26 09:21:46 crc kubenswrapper[4806]: I0126 09:21:46.644981 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a"} Jan 26 09:21:46 crc kubenswrapper[4806]: I0126 09:21:46.645267 4806 scope.go:117] "RemoveContainer" containerID="96e9b18336a9454be75b86334a09c23dec3ea9118320b75e864a2907328c228f" Jan 26 09:21:46 crc kubenswrapper[4806]: I0126 09:21:46.646007 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:21:46 crc kubenswrapper[4806]: E0126 09:21:46.646312 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:21:57 crc kubenswrapper[4806]: I0126 09:21:57.042435 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:21:57 crc kubenswrapper[4806]: E0126 09:21:57.043256 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:22:11 crc kubenswrapper[4806]: I0126 09:22:11.961175 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-msp45"] Jan 26 09:22:11 crc kubenswrapper[4806]: E0126 09:22:11.966715 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b0ffed-1723-4897-97d7-1e459f298b6c" containerName="collect-profiles" Jan 26 09:22:11 crc kubenswrapper[4806]: I0126 09:22:11.966733 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b0ffed-1723-4897-97d7-1e459f298b6c" containerName="collect-profiles" Jan 26 09:22:11 crc kubenswrapper[4806]: I0126 09:22:11.967009 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b0ffed-1723-4897-97d7-1e459f298b6c" containerName="collect-profiles" Jan 26 09:22:11 crc kubenswrapper[4806]: I0126 09:22:11.968697 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:11 crc kubenswrapper[4806]: I0126 09:22:11.981096 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-msp45"] Jan 26 09:22:12 crc kubenswrapper[4806]: I0126 09:22:12.042395 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:22:12 crc kubenswrapper[4806]: E0126 09:22:12.042766 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:22:12 crc kubenswrapper[4806]: I0126 09:22:12.056877 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tszj5\" (UniqueName: \"kubernetes.io/projected/747a935c-827f-4295-8e7e-c136dd6ae1a0-kube-api-access-tszj5\") pod \"community-operators-msp45\" (UID: \"747a935c-827f-4295-8e7e-c136dd6ae1a0\") " pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:12 crc kubenswrapper[4806]: I0126 09:22:12.057040 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/747a935c-827f-4295-8e7e-c136dd6ae1a0-catalog-content\") pod \"community-operators-msp45\" (UID: \"747a935c-827f-4295-8e7e-c136dd6ae1a0\") " pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:12 crc kubenswrapper[4806]: I0126 09:22:12.057102 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/747a935c-827f-4295-8e7e-c136dd6ae1a0-utilities\") pod \"community-operators-msp45\" (UID: \"747a935c-827f-4295-8e7e-c136dd6ae1a0\") " pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:12 crc kubenswrapper[4806]: I0126 09:22:12.158442 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/747a935c-827f-4295-8e7e-c136dd6ae1a0-catalog-content\") pod \"community-operators-msp45\" (UID: \"747a935c-827f-4295-8e7e-c136dd6ae1a0\") " pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:12 crc kubenswrapper[4806]: I0126 09:22:12.158509 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/747a935c-827f-4295-8e7e-c136dd6ae1a0-utilities\") pod \"community-operators-msp45\" (UID: \"747a935c-827f-4295-8e7e-c136dd6ae1a0\") " pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:12 crc kubenswrapper[4806]: I0126 09:22:12.158695 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tszj5\" (UniqueName: \"kubernetes.io/projected/747a935c-827f-4295-8e7e-c136dd6ae1a0-kube-api-access-tszj5\") pod \"community-operators-msp45\" (UID: \"747a935c-827f-4295-8e7e-c136dd6ae1a0\") " pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:12 crc kubenswrapper[4806]: I0126 09:22:12.159560 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/747a935c-827f-4295-8e7e-c136dd6ae1a0-catalog-content\") pod \"community-operators-msp45\" (UID: \"747a935c-827f-4295-8e7e-c136dd6ae1a0\") " pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:12 crc kubenswrapper[4806]: I0126 09:22:12.159602 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/747a935c-827f-4295-8e7e-c136dd6ae1a0-utilities\") pod \"community-operators-msp45\" (UID: \"747a935c-827f-4295-8e7e-c136dd6ae1a0\") " pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:12 crc kubenswrapper[4806]: I0126 09:22:12.181029 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tszj5\" (UniqueName: \"kubernetes.io/projected/747a935c-827f-4295-8e7e-c136dd6ae1a0-kube-api-access-tszj5\") pod \"community-operators-msp45\" (UID: \"747a935c-827f-4295-8e7e-c136dd6ae1a0\") " pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:12 crc kubenswrapper[4806]: I0126 09:22:12.293895 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:13 crc kubenswrapper[4806]: I0126 09:22:13.021399 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-msp45"] Jan 26 09:22:13 crc kubenswrapper[4806]: I0126 09:22:13.865446 4806 generic.go:334] "Generic (PLEG): container finished" podID="747a935c-827f-4295-8e7e-c136dd6ae1a0" containerID="95e2d53695658ad3d578d0e80891cd5a9d6c7f7bc1946016963bacea4edf68c1" exitCode=0 Jan 26 09:22:13 crc kubenswrapper[4806]: I0126 09:22:13.865578 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msp45" event={"ID":"747a935c-827f-4295-8e7e-c136dd6ae1a0","Type":"ContainerDied","Data":"95e2d53695658ad3d578d0e80891cd5a9d6c7f7bc1946016963bacea4edf68c1"} Jan 26 09:22:13 crc kubenswrapper[4806]: I0126 09:22:13.866249 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msp45" event={"ID":"747a935c-827f-4295-8e7e-c136dd6ae1a0","Type":"ContainerStarted","Data":"c348c6afc17ce782f2cac2a81fde2f0fb1f389525cb4d3d6bfbd82b963024b8d"} Jan 26 09:22:13 crc kubenswrapper[4806]: I0126 09:22:13.867729 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 09:22:14 crc kubenswrapper[4806]: I0126 09:22:14.877226 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msp45" event={"ID":"747a935c-827f-4295-8e7e-c136dd6ae1a0","Type":"ContainerStarted","Data":"42bc5d1a04e4ce5481ae0ee84415ca49db9cee922cbaecadd34aa8982405174d"} Jan 26 09:22:16 crc kubenswrapper[4806]: I0126 09:22:16.898487 4806 generic.go:334] "Generic (PLEG): container finished" podID="747a935c-827f-4295-8e7e-c136dd6ae1a0" containerID="42bc5d1a04e4ce5481ae0ee84415ca49db9cee922cbaecadd34aa8982405174d" exitCode=0 Jan 26 09:22:16 crc kubenswrapper[4806]: I0126 09:22:16.898565 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msp45" event={"ID":"747a935c-827f-4295-8e7e-c136dd6ae1a0","Type":"ContainerDied","Data":"42bc5d1a04e4ce5481ae0ee84415ca49db9cee922cbaecadd34aa8982405174d"} Jan 26 09:22:17 crc kubenswrapper[4806]: I0126 09:22:17.910586 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msp45" 
event={"ID":"747a935c-827f-4295-8e7e-c136dd6ae1a0","Type":"ContainerStarted","Data":"3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429"} Jan 26 09:22:17 crc kubenswrapper[4806]: I0126 09:22:17.981385 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-msp45" podStartSLOduration=3.518027341 podStartE2EDuration="6.981365987s" podCreationTimestamp="2026-01-26 09:22:11 +0000 UTC" firstStartedPulling="2026-01-26 09:22:13.867450142 +0000 UTC m=+5313.131858208" lastFinishedPulling="2026-01-26 09:22:17.330788798 +0000 UTC m=+5316.595196854" observedRunningTime="2026-01-26 09:22:17.974467942 +0000 UTC m=+5317.238875998" watchObservedRunningTime="2026-01-26 09:22:17.981365987 +0000 UTC m=+5317.245774043" Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.334234 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dgdsf"] Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.336408 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.359051 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgdsf"] Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.538198 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24274be1-353b-4afa-8d99-01c1a737f1e8-utilities\") pod \"certified-operators-dgdsf\" (UID: \"24274be1-353b-4afa-8d99-01c1a737f1e8\") " pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.538421 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24274be1-353b-4afa-8d99-01c1a737f1e8-catalog-content\") pod \"certified-operators-dgdsf\" (UID: \"24274be1-353b-4afa-8d99-01c1a737f1e8\") " pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.538506 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlrll\" (UniqueName: \"kubernetes.io/projected/24274be1-353b-4afa-8d99-01c1a737f1e8-kube-api-access-dlrll\") pod \"certified-operators-dgdsf\" (UID: \"24274be1-353b-4afa-8d99-01c1a737f1e8\") " pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.640957 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlrll\" (UniqueName: \"kubernetes.io/projected/24274be1-353b-4afa-8d99-01c1a737f1e8-kube-api-access-dlrll\") pod \"certified-operators-dgdsf\" (UID: \"24274be1-353b-4afa-8d99-01c1a737f1e8\") " pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.641092 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24274be1-353b-4afa-8d99-01c1a737f1e8-utilities\") pod \"certified-operators-dgdsf\" (UID: \"24274be1-353b-4afa-8d99-01c1a737f1e8\") " pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.641219 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/24274be1-353b-4afa-8d99-01c1a737f1e8-catalog-content\") pod \"certified-operators-dgdsf\" (UID: \"24274be1-353b-4afa-8d99-01c1a737f1e8\") " pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.641640 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24274be1-353b-4afa-8d99-01c1a737f1e8-utilities\") pod \"certified-operators-dgdsf\" (UID: \"24274be1-353b-4afa-8d99-01c1a737f1e8\") " pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.641728 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24274be1-353b-4afa-8d99-01c1a737f1e8-catalog-content\") pod \"certified-operators-dgdsf\" (UID: \"24274be1-353b-4afa-8d99-01c1a737f1e8\") " pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.674268 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlrll\" (UniqueName: \"kubernetes.io/projected/24274be1-353b-4afa-8d99-01c1a737f1e8-kube-api-access-dlrll\") pod \"certified-operators-dgdsf\" (UID: \"24274be1-353b-4afa-8d99-01c1a737f1e8\") " pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:20 crc kubenswrapper[4806]: I0126 09:22:20.959918 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:21 crc kubenswrapper[4806]: I0126 09:22:21.554389 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dgdsf"] Jan 26 09:22:21 crc kubenswrapper[4806]: I0126 09:22:21.950810 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgdsf" event={"ID":"24274be1-353b-4afa-8d99-01c1a737f1e8","Type":"ContainerStarted","Data":"d782bd5f8256ab3c7520c9921a98cb1b7bec75bb3bf2ac20ca190a1bded78170"} Jan 26 09:22:21 crc kubenswrapper[4806]: I0126 09:22:21.951103 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgdsf" event={"ID":"24274be1-353b-4afa-8d99-01c1a737f1e8","Type":"ContainerStarted","Data":"0f8817d65fdc4a459f97bb03c45cde6da933f9e85bc51663275260da4124a578"} Jan 26 09:22:22 crc kubenswrapper[4806]: I0126 09:22:22.296017 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:22 crc kubenswrapper[4806]: I0126 09:22:22.296082 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:22 crc kubenswrapper[4806]: I0126 09:22:22.963194 4806 generic.go:334] "Generic (PLEG): container finished" podID="24274be1-353b-4afa-8d99-01c1a737f1e8" containerID="d782bd5f8256ab3c7520c9921a98cb1b7bec75bb3bf2ac20ca190a1bded78170" exitCode=0 Jan 26 09:22:22 crc kubenswrapper[4806]: I0126 09:22:22.963331 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgdsf" event={"ID":"24274be1-353b-4afa-8d99-01c1a737f1e8","Type":"ContainerDied","Data":"d782bd5f8256ab3c7520c9921a98cb1b7bec75bb3bf2ac20ca190a1bded78170"} Jan 26 09:22:23 crc kubenswrapper[4806]: I0126 09:22:23.351439 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-msp45" 
podUID="747a935c-827f-4295-8e7e-c136dd6ae1a0" containerName="registry-server" probeResult="failure" output=< Jan 26 09:22:23 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 09:22:23 crc kubenswrapper[4806]: > Jan 26 09:22:23 crc kubenswrapper[4806]: I0126 09:22:23.976267 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgdsf" event={"ID":"24274be1-353b-4afa-8d99-01c1a737f1e8","Type":"ContainerStarted","Data":"153159c2969b1942cebbbf90929e06a708346dd5c0352faaafe56dc9c5b26442"} Jan 26 09:22:24 crc kubenswrapper[4806]: I0126 09:22:24.986881 4806 generic.go:334] "Generic (PLEG): container finished" podID="24274be1-353b-4afa-8d99-01c1a737f1e8" containerID="153159c2969b1942cebbbf90929e06a708346dd5c0352faaafe56dc9c5b26442" exitCode=0 Jan 26 09:22:24 crc kubenswrapper[4806]: I0126 09:22:24.987184 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgdsf" event={"ID":"24274be1-353b-4afa-8d99-01c1a737f1e8","Type":"ContainerDied","Data":"153159c2969b1942cebbbf90929e06a708346dd5c0352faaafe56dc9c5b26442"} Jan 26 09:22:25 crc kubenswrapper[4806]: I0126 09:22:25.042147 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:22:25 crc kubenswrapper[4806]: E0126 09:22:25.042402 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:22:25 crc kubenswrapper[4806]: I0126 09:22:25.996545 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgdsf" event={"ID":"24274be1-353b-4afa-8d99-01c1a737f1e8","Type":"ContainerStarted","Data":"21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b"} Jan 26 09:22:26 crc kubenswrapper[4806]: E0126 09:22:26.010441 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 26 09:22:26 crc kubenswrapper[4806]: I0126 09:22:26.050248 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dgdsf" podStartSLOduration=3.605138887 podStartE2EDuration="6.050205904s" podCreationTimestamp="2026-01-26 09:22:20 +0000 UTC" firstStartedPulling="2026-01-26 09:22:22.965407026 +0000 UTC m=+5322.229815082" lastFinishedPulling="2026-01-26 09:22:25.410474043 +0000 UTC m=+5324.674882099" observedRunningTime="2026-01-26 09:22:26.020100302 +0000 UTC m=+5325.284508358" watchObservedRunningTime="2026-01-26 09:22:26.050205904 +0000 UTC m=+5325.314613960" Jan 26 09:22:30 crc kubenswrapper[4806]: I0126 09:22:30.960172 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:30 crc kubenswrapper[4806]: I0126 09:22:30.960688 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:31 crc kubenswrapper[4806]: I0126 09:22:31.006145 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:31 crc kubenswrapper[4806]: I0126 09:22:31.100101 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:31 crc kubenswrapper[4806]: I0126 09:22:31.241369 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgdsf"] Jan 26 09:22:32 crc kubenswrapper[4806]: I0126 09:22:32.346366 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:32 crc kubenswrapper[4806]: I0126 09:22:32.399930 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:33 crc kubenswrapper[4806]: I0126 09:22:33.071304 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dgdsf" podUID="24274be1-353b-4afa-8d99-01c1a737f1e8" containerName="registry-server" containerID="cri-o://21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b" gracePeriod=2 Jan 26 09:22:33 crc kubenswrapper[4806]: I0126 09:22:33.650282 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:33 crc kubenswrapper[4806]: I0126 09:22:33.827488 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlrll\" (UniqueName: \"kubernetes.io/projected/24274be1-353b-4afa-8d99-01c1a737f1e8-kube-api-access-dlrll\") pod \"24274be1-353b-4afa-8d99-01c1a737f1e8\" (UID: \"24274be1-353b-4afa-8d99-01c1a737f1e8\") " Jan 26 09:22:33 crc kubenswrapper[4806]: I0126 09:22:33.827772 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24274be1-353b-4afa-8d99-01c1a737f1e8-catalog-content\") pod \"24274be1-353b-4afa-8d99-01c1a737f1e8\" (UID: \"24274be1-353b-4afa-8d99-01c1a737f1e8\") " Jan 26 09:22:33 crc kubenswrapper[4806]: I0126 09:22:33.827797 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24274be1-353b-4afa-8d99-01c1a737f1e8-utilities\") pod \"24274be1-353b-4afa-8d99-01c1a737f1e8\" (UID: \"24274be1-353b-4afa-8d99-01c1a737f1e8\") " Jan 26 09:22:33 crc kubenswrapper[4806]: I0126 09:22:33.828731 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24274be1-353b-4afa-8d99-01c1a737f1e8-utilities" (OuterVolumeSpecName: "utilities") pod "24274be1-353b-4afa-8d99-01c1a737f1e8" (UID: "24274be1-353b-4afa-8d99-01c1a737f1e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:22:33 crc kubenswrapper[4806]: I0126 09:22:33.833601 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24274be1-353b-4afa-8d99-01c1a737f1e8-kube-api-access-dlrll" (OuterVolumeSpecName: "kube-api-access-dlrll") pod "24274be1-353b-4afa-8d99-01c1a737f1e8" (UID: "24274be1-353b-4afa-8d99-01c1a737f1e8"). InnerVolumeSpecName "kube-api-access-dlrll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:22:33 crc kubenswrapper[4806]: I0126 09:22:33.875784 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24274be1-353b-4afa-8d99-01c1a737f1e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24274be1-353b-4afa-8d99-01c1a737f1e8" (UID: "24274be1-353b-4afa-8d99-01c1a737f1e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:22:33 crc kubenswrapper[4806]: I0126 09:22:33.930237 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlrll\" (UniqueName: \"kubernetes.io/projected/24274be1-353b-4afa-8d99-01c1a737f1e8-kube-api-access-dlrll\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:33 crc kubenswrapper[4806]: I0126 09:22:33.930274 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24274be1-353b-4afa-8d99-01c1a737f1e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:33 crc kubenswrapper[4806]: I0126 09:22:33.930283 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24274be1-353b-4afa-8d99-01c1a737f1e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.050069 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-msp45"] Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.083907 4806 generic.go:334] "Generic (PLEG): container finished" podID="24274be1-353b-4afa-8d99-01c1a737f1e8" containerID="21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b" exitCode=0 Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.083989 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dgdsf" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.084023 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgdsf" event={"ID":"24274be1-353b-4afa-8d99-01c1a737f1e8","Type":"ContainerDied","Data":"21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b"} Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.084109 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dgdsf" event={"ID":"24274be1-353b-4afa-8d99-01c1a737f1e8","Type":"ContainerDied","Data":"0f8817d65fdc4a459f97bb03c45cde6da933f9e85bc51663275260da4124a578"} Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.084133 4806 scope.go:117] "RemoveContainer" containerID="21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.084410 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-msp45" podUID="747a935c-827f-4295-8e7e-c136dd6ae1a0" containerName="registry-server" containerID="cri-o://3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429" gracePeriod=2 Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.117131 4806 scope.go:117] "RemoveContainer" containerID="153159c2969b1942cebbbf90929e06a708346dd5c0352faaafe56dc9c5b26442" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.127552 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dgdsf"] Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.139307 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dgdsf"] Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.145387 4806 scope.go:117] "RemoveContainer" containerID="d782bd5f8256ab3c7520c9921a98cb1b7bec75bb3bf2ac20ca190a1bded78170" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.305321 4806 scope.go:117] "RemoveContainer" containerID="21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b" Jan 26 09:22:34 crc kubenswrapper[4806]: E0126 09:22:34.308389 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b\": container with ID starting with 21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b not found: ID does not exist" containerID="21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.308433 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b"} err="failed to get container status \"21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b\": rpc error: code = NotFound desc = could not find container \"21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b\": container with ID starting with 21e8600c576f20d5f336a66c3be91671279e346a18c3677aff3861f2a73ef55b not found: ID does not exist" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.308460 4806 scope.go:117] "RemoveContainer" containerID="153159c2969b1942cebbbf90929e06a708346dd5c0352faaafe56dc9c5b26442" Jan 26 09:22:34 crc kubenswrapper[4806]: E0126 09:22:34.308810 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"153159c2969b1942cebbbf90929e06a708346dd5c0352faaafe56dc9c5b26442\": container with ID starting with 153159c2969b1942cebbbf90929e06a708346dd5c0352faaafe56dc9c5b26442 not found: ID does not exist" containerID="153159c2969b1942cebbbf90929e06a708346dd5c0352faaafe56dc9c5b26442" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.308850 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"153159c2969b1942cebbbf90929e06a708346dd5c0352faaafe56dc9c5b26442"} err="failed to get container status \"153159c2969b1942cebbbf90929e06a708346dd5c0352faaafe56dc9c5b26442\": rpc error: code = NotFound desc = could not find container \"153159c2969b1942cebbbf90929e06a708346dd5c0352faaafe56dc9c5b26442\": container with ID starting with 153159c2969b1942cebbbf90929e06a708346dd5c0352faaafe56dc9c5b26442 not found: ID does not exist" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.308870 4806 scope.go:117] "RemoveContainer" containerID="d782bd5f8256ab3c7520c9921a98cb1b7bec75bb3bf2ac20ca190a1bded78170" Jan 26 09:22:34 crc kubenswrapper[4806]: E0126 09:22:34.309113 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d782bd5f8256ab3c7520c9921a98cb1b7bec75bb3bf2ac20ca190a1bded78170\": container with ID starting with d782bd5f8256ab3c7520c9921a98cb1b7bec75bb3bf2ac20ca190a1bded78170 not found: ID does not exist" containerID="d782bd5f8256ab3c7520c9921a98cb1b7bec75bb3bf2ac20ca190a1bded78170" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.309137 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d782bd5f8256ab3c7520c9921a98cb1b7bec75bb3bf2ac20ca190a1bded78170"} err="failed to get container status \"d782bd5f8256ab3c7520c9921a98cb1b7bec75bb3bf2ac20ca190a1bded78170\": rpc error: code = NotFound desc = could not find container \"d782bd5f8256ab3c7520c9921a98cb1b7bec75bb3bf2ac20ca190a1bded78170\": container with ID starting with d782bd5f8256ab3c7520c9921a98cb1b7bec75bb3bf2ac20ca190a1bded78170 not found: ID does not exist" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.671731 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.846433 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/747a935c-827f-4295-8e7e-c136dd6ae1a0-utilities\") pod \"747a935c-827f-4295-8e7e-c136dd6ae1a0\" (UID: \"747a935c-827f-4295-8e7e-c136dd6ae1a0\") " Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.846647 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/747a935c-827f-4295-8e7e-c136dd6ae1a0-catalog-content\") pod \"747a935c-827f-4295-8e7e-c136dd6ae1a0\" (UID: \"747a935c-827f-4295-8e7e-c136dd6ae1a0\") " Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.846763 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tszj5\" (UniqueName: \"kubernetes.io/projected/747a935c-827f-4295-8e7e-c136dd6ae1a0-kube-api-access-tszj5\") pod \"747a935c-827f-4295-8e7e-c136dd6ae1a0\" (UID: \"747a935c-827f-4295-8e7e-c136dd6ae1a0\") " Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.847411 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/747a935c-827f-4295-8e7e-c136dd6ae1a0-utilities" (OuterVolumeSpecName: "utilities") pod "747a935c-827f-4295-8e7e-c136dd6ae1a0" (UID: "747a935c-827f-4295-8e7e-c136dd6ae1a0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.852119 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747a935c-827f-4295-8e7e-c136dd6ae1a0-kube-api-access-tszj5" (OuterVolumeSpecName: "kube-api-access-tszj5") pod "747a935c-827f-4295-8e7e-c136dd6ae1a0" (UID: "747a935c-827f-4295-8e7e-c136dd6ae1a0"). InnerVolumeSpecName "kube-api-access-tszj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.854722 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tszj5\" (UniqueName: \"kubernetes.io/projected/747a935c-827f-4295-8e7e-c136dd6ae1a0-kube-api-access-tszj5\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.854767 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/747a935c-827f-4295-8e7e-c136dd6ae1a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.936031 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/747a935c-827f-4295-8e7e-c136dd6ae1a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "747a935c-827f-4295-8e7e-c136dd6ae1a0" (UID: "747a935c-827f-4295-8e7e-c136dd6ae1a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:22:34 crc kubenswrapper[4806]: I0126 09:22:34.956643 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/747a935c-827f-4295-8e7e-c136dd6ae1a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.054048 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24274be1-353b-4afa-8d99-01c1a737f1e8" path="/var/lib/kubelet/pods/24274be1-353b-4afa-8d99-01c1a737f1e8/volumes" Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.096634 4806 generic.go:334] "Generic (PLEG): container finished" podID="747a935c-827f-4295-8e7e-c136dd6ae1a0" containerID="3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429" exitCode=0 Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.096672 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msp45" event={"ID":"747a935c-827f-4295-8e7e-c136dd6ae1a0","Type":"ContainerDied","Data":"3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429"} Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.096695 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-msp45" event={"ID":"747a935c-827f-4295-8e7e-c136dd6ae1a0","Type":"ContainerDied","Data":"c348c6afc17ce782f2cac2a81fde2f0fb1f389525cb4d3d6bfbd82b963024b8d"} Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.096712 4806 scope.go:117] "RemoveContainer" containerID="3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429" Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.096735 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-msp45" Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.148471 4806 scope.go:117] "RemoveContainer" containerID="42bc5d1a04e4ce5481ae0ee84415ca49db9cee922cbaecadd34aa8982405174d" Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.154895 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-msp45"] Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.165539 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-msp45"] Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.174639 4806 scope.go:117] "RemoveContainer" containerID="95e2d53695658ad3d578d0e80891cd5a9d6c7f7bc1946016963bacea4edf68c1" Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.266277 4806 scope.go:117] "RemoveContainer" containerID="3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429" Jan 26 09:22:35 crc kubenswrapper[4806]: E0126 09:22:35.270021 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429\": container with ID starting with 3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429 not found: ID does not exist" containerID="3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429" Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.270075 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429"} err="failed to get container status \"3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429\": rpc error: code = 
NotFound desc = could not find container \"3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429\": container with ID starting with 3f856ae733606927d4145cc55a0f839b2f35081cdb98b8f520b1fc2e3aac5429 not found: ID does not exist" Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.270132 4806 scope.go:117] "RemoveContainer" containerID="42bc5d1a04e4ce5481ae0ee84415ca49db9cee922cbaecadd34aa8982405174d" Jan 26 09:22:35 crc kubenswrapper[4806]: E0126 09:22:35.277774 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42bc5d1a04e4ce5481ae0ee84415ca49db9cee922cbaecadd34aa8982405174d\": container with ID starting with 42bc5d1a04e4ce5481ae0ee84415ca49db9cee922cbaecadd34aa8982405174d not found: ID does not exist" containerID="42bc5d1a04e4ce5481ae0ee84415ca49db9cee922cbaecadd34aa8982405174d" Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.277844 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42bc5d1a04e4ce5481ae0ee84415ca49db9cee922cbaecadd34aa8982405174d"} err="failed to get container status \"42bc5d1a04e4ce5481ae0ee84415ca49db9cee922cbaecadd34aa8982405174d\": rpc error: code = NotFound desc = could not find container \"42bc5d1a04e4ce5481ae0ee84415ca49db9cee922cbaecadd34aa8982405174d\": container with ID starting with 42bc5d1a04e4ce5481ae0ee84415ca49db9cee922cbaecadd34aa8982405174d not found: ID does not exist" Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.277870 4806 scope.go:117] "RemoveContainer" containerID="95e2d53695658ad3d578d0e80891cd5a9d6c7f7bc1946016963bacea4edf68c1" Jan 26 09:22:35 crc kubenswrapper[4806]: E0126 09:22:35.283646 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95e2d53695658ad3d578d0e80891cd5a9d6c7f7bc1946016963bacea4edf68c1\": container with ID starting with 95e2d53695658ad3d578d0e80891cd5a9d6c7f7bc1946016963bacea4edf68c1 not found: ID does not exist" containerID="95e2d53695658ad3d578d0e80891cd5a9d6c7f7bc1946016963bacea4edf68c1" Jan 26 09:22:35 crc kubenswrapper[4806]: I0126 09:22:35.283690 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95e2d53695658ad3d578d0e80891cd5a9d6c7f7bc1946016963bacea4edf68c1"} err="failed to get container status \"95e2d53695658ad3d578d0e80891cd5a9d6c7f7bc1946016963bacea4edf68c1\": rpc error: code = NotFound desc = could not find container \"95e2d53695658ad3d578d0e80891cd5a9d6c7f7bc1946016963bacea4edf68c1\": container with ID starting with 95e2d53695658ad3d578d0e80891cd5a9d6c7f7bc1946016963bacea4edf68c1 not found: ID does not exist" Jan 26 09:22:37 crc kubenswrapper[4806]: I0126 09:22:37.042451 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:22:37 crc kubenswrapper[4806]: E0126 09:22:37.043080 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:22:37 crc kubenswrapper[4806]: I0126 09:22:37.052342 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="747a935c-827f-4295-8e7e-c136dd6ae1a0" path="/var/lib/kubelet/pods/747a935c-827f-4295-8e7e-c136dd6ae1a0/volumes" Jan 26 09:22:48 crc kubenswrapper[4806]: I0126 09:22:48.042375 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:22:48 crc kubenswrapper[4806]: E0126 09:22:48.043920 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:23:02 crc kubenswrapper[4806]: I0126 09:23:02.042048 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:23:02 crc kubenswrapper[4806]: E0126 09:23:02.042769 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:23:14 crc kubenswrapper[4806]: I0126 09:23:14.041776 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:23:14 crc kubenswrapper[4806]: E0126 09:23:14.042673 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:23:26 crc kubenswrapper[4806]: I0126 09:23:26.042298 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:23:26 crc kubenswrapper[4806]: E0126 09:23:26.043110 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.682815 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-75fb9bfb7c-t5l28"] Jan 26 09:23:31 crc kubenswrapper[4806]: E0126 09:23:31.683934 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24274be1-353b-4afa-8d99-01c1a737f1e8" containerName="extract-utilities" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.683950 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="24274be1-353b-4afa-8d99-01c1a737f1e8" containerName="extract-utilities" Jan 26 09:23:31 crc kubenswrapper[4806]: E0126 09:23:31.683981 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747a935c-827f-4295-8e7e-c136dd6ae1a0" 
containerName="registry-server" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.683988 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="747a935c-827f-4295-8e7e-c136dd6ae1a0" containerName="registry-server" Jan 26 09:23:31 crc kubenswrapper[4806]: E0126 09:23:31.684004 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24274be1-353b-4afa-8d99-01c1a737f1e8" containerName="registry-server" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.684011 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="24274be1-353b-4afa-8d99-01c1a737f1e8" containerName="registry-server" Jan 26 09:23:31 crc kubenswrapper[4806]: E0126 09:23:31.684022 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747a935c-827f-4295-8e7e-c136dd6ae1a0" containerName="extract-utilities" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.684028 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="747a935c-827f-4295-8e7e-c136dd6ae1a0" containerName="extract-utilities" Jan 26 09:23:31 crc kubenswrapper[4806]: E0126 09:23:31.684045 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747a935c-827f-4295-8e7e-c136dd6ae1a0" containerName="extract-content" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.684052 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="747a935c-827f-4295-8e7e-c136dd6ae1a0" containerName="extract-content" Jan 26 09:23:31 crc kubenswrapper[4806]: E0126 09:23:31.684073 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24274be1-353b-4afa-8d99-01c1a737f1e8" containerName="extract-content" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.684079 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="24274be1-353b-4afa-8d99-01c1a737f1e8" containerName="extract-content" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.684279 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="747a935c-827f-4295-8e7e-c136dd6ae1a0" containerName="registry-server" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.684306 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="24274be1-353b-4afa-8d99-01c1a737f1e8" containerName="registry-server" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.685771 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.703907 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75fb9bfb7c-t5l28"] Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.777865 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnptz\" (UniqueName: \"kubernetes.io/projected/236df924-79aa-410f-905e-aba909cdfae2-kube-api-access-jnptz\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.778135 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-httpd-config\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.778173 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-combined-ca-bundle\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.778192 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-ovndb-tls-certs\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.778257 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-config\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.778273 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-public-tls-certs\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.778307 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-internal-tls-certs\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.879728 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-config\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.879783 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-public-tls-certs\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.879838 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-internal-tls-certs\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.880022 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnptz\" (UniqueName: \"kubernetes.io/projected/236df924-79aa-410f-905e-aba909cdfae2-kube-api-access-jnptz\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.880050 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-httpd-config\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.880098 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-combined-ca-bundle\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.880125 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-ovndb-tls-certs\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.886952 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-public-tls-certs\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.887080 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-config\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.887370 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-combined-ca-bundle\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.888051 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-ovndb-tls-certs\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: 
\"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.889283 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-internal-tls-certs\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.895275 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/236df924-79aa-410f-905e-aba909cdfae2-httpd-config\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:31 crc kubenswrapper[4806]: I0126 09:23:31.907468 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnptz\" (UniqueName: \"kubernetes.io/projected/236df924-79aa-410f-905e-aba909cdfae2-kube-api-access-jnptz\") pod \"neutron-75fb9bfb7c-t5l28\" (UID: \"236df924-79aa-410f-905e-aba909cdfae2\") " pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:32 crc kubenswrapper[4806]: I0126 09:23:32.012677 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:32 crc kubenswrapper[4806]: I0126 09:23:32.648828 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-75fb9bfb7c-t5l28"] Jan 26 09:23:33 crc kubenswrapper[4806]: I0126 09:23:33.584577 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75fb9bfb7c-t5l28" event={"ID":"236df924-79aa-410f-905e-aba909cdfae2","Type":"ContainerStarted","Data":"95744eb8ee2b728f6a83bfa1467bc22365fe209212d4c089b9c75cad3dd14b20"} Jan 26 09:23:33 crc kubenswrapper[4806]: I0126 09:23:33.585134 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75fb9bfb7c-t5l28" event={"ID":"236df924-79aa-410f-905e-aba909cdfae2","Type":"ContainerStarted","Data":"8619346cf1b397da75af325f5626fab381a694f8f1a3cff14b4683e04933269c"} Jan 26 09:23:33 crc kubenswrapper[4806]: I0126 09:23:33.585147 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-75fb9bfb7c-t5l28" event={"ID":"236df924-79aa-410f-905e-aba909cdfae2","Type":"ContainerStarted","Data":"be135617f772f932821a9116dab6578bfed4f5c96e017d5f06e8409fced74335"} Jan 26 09:23:33 crc kubenswrapper[4806]: I0126 09:23:33.585163 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:23:33 crc kubenswrapper[4806]: I0126 09:23:33.608595 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-75fb9bfb7c-t5l28" podStartSLOduration=2.60857387 podStartE2EDuration="2.60857387s" podCreationTimestamp="2026-01-26 09:23:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:23:33.60187987 +0000 UTC m=+5392.866287946" watchObservedRunningTime="2026-01-26 09:23:33.60857387 +0000 UTC m=+5392.872981926" Jan 26 09:23:38 crc kubenswrapper[4806]: I0126 09:23:38.042214 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:23:38 crc kubenswrapper[4806]: E0126 09:23:38.043042 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:23:50 crc kubenswrapper[4806]: I0126 09:23:50.042245 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:23:50 crc kubenswrapper[4806]: E0126 09:23:50.043045 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:24:02 crc kubenswrapper[4806]: I0126 09:24:02.024963 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-75fb9bfb7c-t5l28" Jan 26 09:24:02 crc kubenswrapper[4806]: I0126 09:24:02.124345 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7fd7f5fc77-snnst"] Jan 26 09:24:02 crc kubenswrapper[4806]: I0126 09:24:02.124858 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7fd7f5fc77-snnst" podUID="1a42a8dd-6e74-4dba-a208-21461ce7ad8f" containerName="neutron-api" containerID="cri-o://dbf92d3c9ccab2d639420553b365841303b3b37631362ea1814d3548b107050f" gracePeriod=30 Jan 26 09:24:02 crc kubenswrapper[4806]: I0126 09:24:02.125239 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7fd7f5fc77-snnst" podUID="1a42a8dd-6e74-4dba-a208-21461ce7ad8f" containerName="neutron-httpd" containerID="cri-o://917306e7bbeb9b54dc9a02bcbdfe6b4c60d8fe2df3c9a899fed4684d4d1eafa4" gracePeriod=30 Jan 26 09:24:02 crc kubenswrapper[4806]: I0126 09:24:02.842702 4806 generic.go:334] "Generic (PLEG): container finished" podID="1a42a8dd-6e74-4dba-a208-21461ce7ad8f" containerID="917306e7bbeb9b54dc9a02bcbdfe6b4c60d8fe2df3c9a899fed4684d4d1eafa4" exitCode=0 Jan 26 09:24:02 crc kubenswrapper[4806]: I0126 09:24:02.842754 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd7f5fc77-snnst" event={"ID":"1a42a8dd-6e74-4dba-a208-21461ce7ad8f","Type":"ContainerDied","Data":"917306e7bbeb9b54dc9a02bcbdfe6b4c60d8fe2df3c9a899fed4684d4d1eafa4"} Jan 26 09:24:03 crc kubenswrapper[4806]: I0126 09:24:03.855214 4806 generic.go:334] "Generic (PLEG): container finished" podID="1a42a8dd-6e74-4dba-a208-21461ce7ad8f" containerID="dbf92d3c9ccab2d639420553b365841303b3b37631362ea1814d3548b107050f" exitCode=0 Jan 26 09:24:03 crc kubenswrapper[4806]: I0126 09:24:03.855259 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd7f5fc77-snnst" event={"ID":"1a42a8dd-6e74-4dba-a208-21461ce7ad8f","Type":"ContainerDied","Data":"dbf92d3c9ccab2d639420553b365841303b3b37631362ea1814d3548b107050f"} Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.042984 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:24:04 crc kubenswrapper[4806]: E0126 09:24:04.044130 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.103953 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.158555 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-ovndb-tls-certs\") pod \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.158669 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-config\") pod \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.159014 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-httpd-config\") pod \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.159100 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-public-tls-certs\") pod \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.159198 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-internal-tls-certs\") pod \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.159246 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmnq7\" (UniqueName: \"kubernetes.io/projected/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-kube-api-access-bmnq7\") pod \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.159365 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-combined-ca-bundle\") pod \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\" (UID: \"1a42a8dd-6e74-4dba-a208-21461ce7ad8f\") " Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.186141 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1a42a8dd-6e74-4dba-a208-21461ce7ad8f" (UID: "1a42a8dd-6e74-4dba-a208-21461ce7ad8f"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.188029 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-kube-api-access-bmnq7" (OuterVolumeSpecName: "kube-api-access-bmnq7") pod "1a42a8dd-6e74-4dba-a208-21461ce7ad8f" (UID: "1a42a8dd-6e74-4dba-a208-21461ce7ad8f"). InnerVolumeSpecName "kube-api-access-bmnq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.225820 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1a42a8dd-6e74-4dba-a208-21461ce7ad8f" (UID: "1a42a8dd-6e74-4dba-a208-21461ce7ad8f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.228809 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1a42a8dd-6e74-4dba-a208-21461ce7ad8f" (UID: "1a42a8dd-6e74-4dba-a208-21461ce7ad8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.245516 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1a42a8dd-6e74-4dba-a208-21461ce7ad8f" (UID: "1a42a8dd-6e74-4dba-a208-21461ce7ad8f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.262592 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-config" (OuterVolumeSpecName: "config") pod "1a42a8dd-6e74-4dba-a208-21461ce7ad8f" (UID: "1a42a8dd-6e74-4dba-a208-21461ce7ad8f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.263500 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.263540 4806 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.263566 4806 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.263575 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmnq7\" (UniqueName: \"kubernetes.io/projected/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-kube-api-access-bmnq7\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.263585 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.263593 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.282383 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1a42a8dd-6e74-4dba-a208-21461ce7ad8f" (UID: "1a42a8dd-6e74-4dba-a208-21461ce7ad8f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.365817 4806 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1a42a8dd-6e74-4dba-a208-21461ce7ad8f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.865455 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7fd7f5fc77-snnst" event={"ID":"1a42a8dd-6e74-4dba-a208-21461ce7ad8f","Type":"ContainerDied","Data":"dcba9cf66a7fa75532dd8661dacb9cdc8ea890bf0435da063d1451b84efafdef"} Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.865558 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7fd7f5fc77-snnst" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.865569 4806 scope.go:117] "RemoveContainer" containerID="917306e7bbeb9b54dc9a02bcbdfe6b4c60d8fe2df3c9a899fed4684d4d1eafa4" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.894797 4806 scope.go:117] "RemoveContainer" containerID="dbf92d3c9ccab2d639420553b365841303b3b37631362ea1814d3548b107050f" Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.905110 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7fd7f5fc77-snnst"] Jan 26 09:24:04 crc kubenswrapper[4806]: I0126 09:24:04.914560 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7fd7f5fc77-snnst"] Jan 26 09:24:05 crc kubenswrapper[4806]: I0126 09:24:05.053147 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a42a8dd-6e74-4dba-a208-21461ce7ad8f" path="/var/lib/kubelet/pods/1a42a8dd-6e74-4dba-a208-21461ce7ad8f/volumes" Jan 26 09:24:16 crc kubenswrapper[4806]: I0126 09:24:16.042303 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:24:16 crc kubenswrapper[4806]: E0126 09:24:16.043062 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:24:29 crc kubenswrapper[4806]: I0126 09:24:29.042425 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:24:29 crc kubenswrapper[4806]: E0126 09:24:29.043300 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:24:43 crc kubenswrapper[4806]: I0126 09:24:43.042359 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:24:43 crc kubenswrapper[4806]: E0126 09:24:43.044209 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.574628 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-27fss"] Jan 26 09:24:48 crc kubenswrapper[4806]: E0126 09:24:48.575546 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a42a8dd-6e74-4dba-a208-21461ce7ad8f" containerName="neutron-api" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.575558 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a42a8dd-6e74-4dba-a208-21461ce7ad8f" 
containerName="neutron-api" Jan 26 09:24:48 crc kubenswrapper[4806]: E0126 09:24:48.575572 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a42a8dd-6e74-4dba-a208-21461ce7ad8f" containerName="neutron-httpd" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.575578 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a42a8dd-6e74-4dba-a208-21461ce7ad8f" containerName="neutron-httpd" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.575766 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a42a8dd-6e74-4dba-a208-21461ce7ad8f" containerName="neutron-httpd" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.575776 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a42a8dd-6e74-4dba-a208-21461ce7ad8f" containerName="neutron-api" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.577105 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.585839 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27fss"] Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.715921 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d89087-9a99-4d22-b3c7-c9fef844d393-catalog-content\") pod \"redhat-marketplace-27fss\" (UID: \"b6d89087-9a99-4d22-b3c7-c9fef844d393\") " pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.716395 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d89087-9a99-4d22-b3c7-c9fef844d393-utilities\") pod \"redhat-marketplace-27fss\" (UID: \"b6d89087-9a99-4d22-b3c7-c9fef844d393\") " pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.716619 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9tr8\" (UniqueName: \"kubernetes.io/projected/b6d89087-9a99-4d22-b3c7-c9fef844d393-kube-api-access-j9tr8\") pod \"redhat-marketplace-27fss\" (UID: \"b6d89087-9a99-4d22-b3c7-c9fef844d393\") " pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.819077 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d89087-9a99-4d22-b3c7-c9fef844d393-utilities\") pod \"redhat-marketplace-27fss\" (UID: \"b6d89087-9a99-4d22-b3c7-c9fef844d393\") " pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.819803 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d89087-9a99-4d22-b3c7-c9fef844d393-utilities\") pod \"redhat-marketplace-27fss\" (UID: \"b6d89087-9a99-4d22-b3c7-c9fef844d393\") " pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.820390 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9tr8\" (UniqueName: \"kubernetes.io/projected/b6d89087-9a99-4d22-b3c7-c9fef844d393-kube-api-access-j9tr8\") pod \"redhat-marketplace-27fss\" (UID: \"b6d89087-9a99-4d22-b3c7-c9fef844d393\") " 
pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.821032 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d89087-9a99-4d22-b3c7-c9fef844d393-catalog-content\") pod \"redhat-marketplace-27fss\" (UID: \"b6d89087-9a99-4d22-b3c7-c9fef844d393\") " pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.821433 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d89087-9a99-4d22-b3c7-c9fef844d393-catalog-content\") pod \"redhat-marketplace-27fss\" (UID: \"b6d89087-9a99-4d22-b3c7-c9fef844d393\") " pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.856173 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9tr8\" (UniqueName: \"kubernetes.io/projected/b6d89087-9a99-4d22-b3c7-c9fef844d393-kube-api-access-j9tr8\") pod \"redhat-marketplace-27fss\" (UID: \"b6d89087-9a99-4d22-b3c7-c9fef844d393\") " pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:48 crc kubenswrapper[4806]: I0126 09:24:48.896117 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:49 crc kubenswrapper[4806]: I0126 09:24:49.373273 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-27fss"] Jan 26 09:24:49 crc kubenswrapper[4806]: W0126 09:24:49.387429 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6d89087_9a99_4d22_b3c7_c9fef844d393.slice/crio-f9a714c70b3346ade9e913d0180fea0350f7a8fd522ad8a2617c2ea2a7f7270d WatchSource:0}: Error finding container f9a714c70b3346ade9e913d0180fea0350f7a8fd522ad8a2617c2ea2a7f7270d: Status 404 returned error can't find the container with id f9a714c70b3346ade9e913d0180fea0350f7a8fd522ad8a2617c2ea2a7f7270d Jan 26 09:24:50 crc kubenswrapper[4806]: I0126 09:24:50.283960 4806 generic.go:334] "Generic (PLEG): container finished" podID="b6d89087-9a99-4d22-b3c7-c9fef844d393" containerID="588f97895ab7ac85d8189595bc21997a600c4b2bd153694bdea3df799b9c01a7" exitCode=0 Jan 26 09:24:50 crc kubenswrapper[4806]: I0126 09:24:50.284358 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27fss" event={"ID":"b6d89087-9a99-4d22-b3c7-c9fef844d393","Type":"ContainerDied","Data":"588f97895ab7ac85d8189595bc21997a600c4b2bd153694bdea3df799b9c01a7"} Jan 26 09:24:50 crc kubenswrapper[4806]: I0126 09:24:50.285427 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27fss" event={"ID":"b6d89087-9a99-4d22-b3c7-c9fef844d393","Type":"ContainerStarted","Data":"f9a714c70b3346ade9e913d0180fea0350f7a8fd522ad8a2617c2ea2a7f7270d"} Jan 26 09:24:51 crc kubenswrapper[4806]: I0126 09:24:51.294231 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27fss" event={"ID":"b6d89087-9a99-4d22-b3c7-c9fef844d393","Type":"ContainerStarted","Data":"44f7b55ac23cc372e7455fe2c073490088930153adbbba1a9f002653ec34f607"} Jan 26 09:24:52 crc kubenswrapper[4806]: I0126 09:24:52.303589 4806 generic.go:334] "Generic (PLEG): container finished" podID="b6d89087-9a99-4d22-b3c7-c9fef844d393" 
containerID="44f7b55ac23cc372e7455fe2c073490088930153adbbba1a9f002653ec34f607" exitCode=0 Jan 26 09:24:52 crc kubenswrapper[4806]: I0126 09:24:52.303679 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27fss" event={"ID":"b6d89087-9a99-4d22-b3c7-c9fef844d393","Type":"ContainerDied","Data":"44f7b55ac23cc372e7455fe2c073490088930153adbbba1a9f002653ec34f607"} Jan 26 09:24:54 crc kubenswrapper[4806]: I0126 09:24:54.330395 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27fss" event={"ID":"b6d89087-9a99-4d22-b3c7-c9fef844d393","Type":"ContainerStarted","Data":"2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8"} Jan 26 09:24:54 crc kubenswrapper[4806]: I0126 09:24:54.355717 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-27fss" podStartSLOduration=3.510390224 podStartE2EDuration="6.355697331s" podCreationTimestamp="2026-01-26 09:24:48 +0000 UTC" firstStartedPulling="2026-01-26 09:24:50.291107803 +0000 UTC m=+5469.555515859" lastFinishedPulling="2026-01-26 09:24:53.1364149 +0000 UTC m=+5472.400822966" observedRunningTime="2026-01-26 09:24:54.348875588 +0000 UTC m=+5473.613283644" watchObservedRunningTime="2026-01-26 09:24:54.355697331 +0000 UTC m=+5473.620105387" Jan 26 09:24:57 crc kubenswrapper[4806]: I0126 09:24:57.042199 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:24:57 crc kubenswrapper[4806]: E0126 09:24:57.043973 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:24:58 crc kubenswrapper[4806]: I0126 09:24:58.896265 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:58 crc kubenswrapper[4806]: I0126 09:24:58.896593 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:58 crc kubenswrapper[4806]: I0126 09:24:58.950272 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:59 crc kubenswrapper[4806]: I0126 09:24:59.420066 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:24:59 crc kubenswrapper[4806]: I0126 09:24:59.469708 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27fss"] Jan 26 09:25:01 crc kubenswrapper[4806]: I0126 09:25:01.387384 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-27fss" podUID="b6d89087-9a99-4d22-b3c7-c9fef844d393" containerName="registry-server" containerID="cri-o://2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8" gracePeriod=2 Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.398907 4806 generic.go:334] "Generic (PLEG): container finished" podID="b6d89087-9a99-4d22-b3c7-c9fef844d393" 
containerID="2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8" exitCode=0 Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.398977 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.398984 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27fss" event={"ID":"b6d89087-9a99-4d22-b3c7-c9fef844d393","Type":"ContainerDied","Data":"2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8"} Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.399862 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-27fss" event={"ID":"b6d89087-9a99-4d22-b3c7-c9fef844d393","Type":"ContainerDied","Data":"f9a714c70b3346ade9e913d0180fea0350f7a8fd522ad8a2617c2ea2a7f7270d"} Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.399897 4806 scope.go:117] "RemoveContainer" containerID="2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.430103 4806 scope.go:117] "RemoveContainer" containerID="44f7b55ac23cc372e7455fe2c073490088930153adbbba1a9f002653ec34f607" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.451317 4806 scope.go:117] "RemoveContainer" containerID="588f97895ab7ac85d8189595bc21997a600c4b2bd153694bdea3df799b9c01a7" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.508977 4806 scope.go:117] "RemoveContainer" containerID="2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8" Jan 26 09:25:02 crc kubenswrapper[4806]: E0126 09:25:02.509766 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8\": container with ID starting with 2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8 not found: ID does not exist" containerID="2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.509833 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8"} err="failed to get container status \"2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8\": rpc error: code = NotFound desc = could not find container \"2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8\": container with ID starting with 2e0b2319c43dcb1ab1847548667d4b4c50c24d01bb7702ca33094f17d79d36d8 not found: ID does not exist" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.509863 4806 scope.go:117] "RemoveContainer" containerID="44f7b55ac23cc372e7455fe2c073490088930153adbbba1a9f002653ec34f607" Jan 26 09:25:02 crc kubenswrapper[4806]: E0126 09:25:02.510280 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44f7b55ac23cc372e7455fe2c073490088930153adbbba1a9f002653ec34f607\": container with ID starting with 44f7b55ac23cc372e7455fe2c073490088930153adbbba1a9f002653ec34f607 not found: ID does not exist" containerID="44f7b55ac23cc372e7455fe2c073490088930153adbbba1a9f002653ec34f607" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.510299 4806 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"44f7b55ac23cc372e7455fe2c073490088930153adbbba1a9f002653ec34f607"} err="failed to get container status \"44f7b55ac23cc372e7455fe2c073490088930153adbbba1a9f002653ec34f607\": rpc error: code = NotFound desc = could not find container \"44f7b55ac23cc372e7455fe2c073490088930153adbbba1a9f002653ec34f607\": container with ID starting with 44f7b55ac23cc372e7455fe2c073490088930153adbbba1a9f002653ec34f607 not found: ID does not exist" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.510313 4806 scope.go:117] "RemoveContainer" containerID="588f97895ab7ac85d8189595bc21997a600c4b2bd153694bdea3df799b9c01a7" Jan 26 09:25:02 crc kubenswrapper[4806]: E0126 09:25:02.510881 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"588f97895ab7ac85d8189595bc21997a600c4b2bd153694bdea3df799b9c01a7\": container with ID starting with 588f97895ab7ac85d8189595bc21997a600c4b2bd153694bdea3df799b9c01a7 not found: ID does not exist" containerID="588f97895ab7ac85d8189595bc21997a600c4b2bd153694bdea3df799b9c01a7" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.510923 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"588f97895ab7ac85d8189595bc21997a600c4b2bd153694bdea3df799b9c01a7"} err="failed to get container status \"588f97895ab7ac85d8189595bc21997a600c4b2bd153694bdea3df799b9c01a7\": rpc error: code = NotFound desc = could not find container \"588f97895ab7ac85d8189595bc21997a600c4b2bd153694bdea3df799b9c01a7\": container with ID starting with 588f97895ab7ac85d8189595bc21997a600c4b2bd153694bdea3df799b9c01a7 not found: ID does not exist" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.520735 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d89087-9a99-4d22-b3c7-c9fef844d393-catalog-content\") pod \"b6d89087-9a99-4d22-b3c7-c9fef844d393\" (UID: \"b6d89087-9a99-4d22-b3c7-c9fef844d393\") " Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.520782 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d89087-9a99-4d22-b3c7-c9fef844d393-utilities\") pod \"b6d89087-9a99-4d22-b3c7-c9fef844d393\" (UID: \"b6d89087-9a99-4d22-b3c7-c9fef844d393\") " Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.521057 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9tr8\" (UniqueName: \"kubernetes.io/projected/b6d89087-9a99-4d22-b3c7-c9fef844d393-kube-api-access-j9tr8\") pod \"b6d89087-9a99-4d22-b3c7-c9fef844d393\" (UID: \"b6d89087-9a99-4d22-b3c7-c9fef844d393\") " Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.521824 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d89087-9a99-4d22-b3c7-c9fef844d393-utilities" (OuterVolumeSpecName: "utilities") pod "b6d89087-9a99-4d22-b3c7-c9fef844d393" (UID: "b6d89087-9a99-4d22-b3c7-c9fef844d393"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.528309 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d89087-9a99-4d22-b3c7-c9fef844d393-kube-api-access-j9tr8" (OuterVolumeSpecName: "kube-api-access-j9tr8") pod "b6d89087-9a99-4d22-b3c7-c9fef844d393" (UID: "b6d89087-9a99-4d22-b3c7-c9fef844d393"). InnerVolumeSpecName "kube-api-access-j9tr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.544216 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d89087-9a99-4d22-b3c7-c9fef844d393-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6d89087-9a99-4d22-b3c7-c9fef844d393" (UID: "b6d89087-9a99-4d22-b3c7-c9fef844d393"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.623474 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9tr8\" (UniqueName: \"kubernetes.io/projected/b6d89087-9a99-4d22-b3c7-c9fef844d393-kube-api-access-j9tr8\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.623514 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6d89087-9a99-4d22-b3c7-c9fef844d393-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:02 crc kubenswrapper[4806]: I0126 09:25:02.623539 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6d89087-9a99-4d22-b3c7-c9fef844d393-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:03 crc kubenswrapper[4806]: I0126 09:25:03.409373 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-27fss" Jan 26 09:25:03 crc kubenswrapper[4806]: I0126 09:25:03.434877 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-27fss"] Jan 26 09:25:03 crc kubenswrapper[4806]: I0126 09:25:03.444900 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-27fss"] Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.052966 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d89087-9a99-4d22-b3c7-c9fef844d393" path="/var/lib/kubelet/pods/b6d89087-9a99-4d22-b3c7-c9fef844d393/volumes" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.321123 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h77dr"] Jan 26 09:25:05 crc kubenswrapper[4806]: E0126 09:25:05.321622 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d89087-9a99-4d22-b3c7-c9fef844d393" containerName="extract-content" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.321646 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d89087-9a99-4d22-b3c7-c9fef844d393" containerName="extract-content" Jan 26 09:25:05 crc kubenswrapper[4806]: E0126 09:25:05.321687 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d89087-9a99-4d22-b3c7-c9fef844d393" containerName="registry-server" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.321695 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d89087-9a99-4d22-b3c7-c9fef844d393" containerName="registry-server" Jan 26 09:25:05 crc kubenswrapper[4806]: E0126 09:25:05.321710 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d89087-9a99-4d22-b3c7-c9fef844d393" containerName="extract-utilities" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.321718 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d89087-9a99-4d22-b3c7-c9fef844d393" containerName="extract-utilities" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.321956 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d89087-9a99-4d22-b3c7-c9fef844d393" containerName="registry-server" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.323696 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.331569 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h77dr"] Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.414950 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/315de5fa-5acc-47aa-a247-cac4c117affb-catalog-content\") pod \"redhat-operators-h77dr\" (UID: \"315de5fa-5acc-47aa-a247-cac4c117affb\") " pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.415307 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f86c\" (UniqueName: \"kubernetes.io/projected/315de5fa-5acc-47aa-a247-cac4c117affb-kube-api-access-2f86c\") pod \"redhat-operators-h77dr\" (UID: \"315de5fa-5acc-47aa-a247-cac4c117affb\") " pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.415449 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/315de5fa-5acc-47aa-a247-cac4c117affb-utilities\") pod \"redhat-operators-h77dr\" (UID: \"315de5fa-5acc-47aa-a247-cac4c117affb\") " pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.517112 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/315de5fa-5acc-47aa-a247-cac4c117affb-utilities\") pod \"redhat-operators-h77dr\" (UID: \"315de5fa-5acc-47aa-a247-cac4c117affb\") " pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.517223 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/315de5fa-5acc-47aa-a247-cac4c117affb-catalog-content\") pod \"redhat-operators-h77dr\" (UID: \"315de5fa-5acc-47aa-a247-cac4c117affb\") " pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.517243 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f86c\" (UniqueName: \"kubernetes.io/projected/315de5fa-5acc-47aa-a247-cac4c117affb-kube-api-access-2f86c\") pod \"redhat-operators-h77dr\" (UID: \"315de5fa-5acc-47aa-a247-cac4c117affb\") " pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.517744 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/315de5fa-5acc-47aa-a247-cac4c117affb-catalog-content\") pod \"redhat-operators-h77dr\" (UID: \"315de5fa-5acc-47aa-a247-cac4c117affb\") " pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.517771 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/315de5fa-5acc-47aa-a247-cac4c117affb-utilities\") pod \"redhat-operators-h77dr\" (UID: \"315de5fa-5acc-47aa-a247-cac4c117affb\") " pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.538373 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2f86c\" (UniqueName: \"kubernetes.io/projected/315de5fa-5acc-47aa-a247-cac4c117affb-kube-api-access-2f86c\") pod \"redhat-operators-h77dr\" (UID: \"315de5fa-5acc-47aa-a247-cac4c117affb\") " pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:05 crc kubenswrapper[4806]: I0126 09:25:05.641397 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:06 crc kubenswrapper[4806]: I0126 09:25:06.226055 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h77dr"] Jan 26 09:25:06 crc kubenswrapper[4806]: W0126 09:25:06.235777 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod315de5fa_5acc_47aa_a247_cac4c117affb.slice/crio-5f0882d5406a7c503ab6b4e42548bbf8df0d9c893ba5b2288db1483c59a90aab WatchSource:0}: Error finding container 5f0882d5406a7c503ab6b4e42548bbf8df0d9c893ba5b2288db1483c59a90aab: Status 404 returned error can't find the container with id 5f0882d5406a7c503ab6b4e42548bbf8df0d9c893ba5b2288db1483c59a90aab Jan 26 09:25:06 crc kubenswrapper[4806]: I0126 09:25:06.435306 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h77dr" event={"ID":"315de5fa-5acc-47aa-a247-cac4c117affb","Type":"ContainerStarted","Data":"5f0882d5406a7c503ab6b4e42548bbf8df0d9c893ba5b2288db1483c59a90aab"} Jan 26 09:25:07 crc kubenswrapper[4806]: I0126 09:25:07.445293 4806 generic.go:334] "Generic (PLEG): container finished" podID="315de5fa-5acc-47aa-a247-cac4c117affb" containerID="95ba4f965f880dd2f58c919adf75acec89c8751370dc463003f32ba973083b90" exitCode=0 Jan 26 09:25:07 crc kubenswrapper[4806]: I0126 09:25:07.445353 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h77dr" event={"ID":"315de5fa-5acc-47aa-a247-cac4c117affb","Type":"ContainerDied","Data":"95ba4f965f880dd2f58c919adf75acec89c8751370dc463003f32ba973083b90"} Jan 26 09:25:08 crc kubenswrapper[4806]: I0126 09:25:08.455293 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h77dr" event={"ID":"315de5fa-5acc-47aa-a247-cac4c117affb","Type":"ContainerStarted","Data":"1cb9f2dcdda705c3e0d6ded12c3a8b43a9e063f3754290018ac43b5e23b16f28"} Jan 26 09:25:11 crc kubenswrapper[4806]: I0126 09:25:11.051074 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:25:11 crc kubenswrapper[4806]: E0126 09:25:11.052834 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:25:13 crc kubenswrapper[4806]: I0126 09:25:13.496721 4806 generic.go:334] "Generic (PLEG): container finished" podID="315de5fa-5acc-47aa-a247-cac4c117affb" containerID="1cb9f2dcdda705c3e0d6ded12c3a8b43a9e063f3754290018ac43b5e23b16f28" exitCode=0 Jan 26 09:25:13 crc kubenswrapper[4806]: I0126 09:25:13.496782 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h77dr" 
event={"ID":"315de5fa-5acc-47aa-a247-cac4c117affb","Type":"ContainerDied","Data":"1cb9f2dcdda705c3e0d6ded12c3a8b43a9e063f3754290018ac43b5e23b16f28"} Jan 26 09:25:14 crc kubenswrapper[4806]: I0126 09:25:14.508634 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h77dr" event={"ID":"315de5fa-5acc-47aa-a247-cac4c117affb","Type":"ContainerStarted","Data":"caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db"} Jan 26 09:25:15 crc kubenswrapper[4806]: I0126 09:25:15.641901 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:15 crc kubenswrapper[4806]: I0126 09:25:15.641956 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:16 crc kubenswrapper[4806]: I0126 09:25:16.686340 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h77dr" podUID="315de5fa-5acc-47aa-a247-cac4c117affb" containerName="registry-server" probeResult="failure" output=< Jan 26 09:25:16 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 09:25:16 crc kubenswrapper[4806]: > Jan 26 09:25:25 crc kubenswrapper[4806]: I0126 09:25:25.041917 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:25:25 crc kubenswrapper[4806]: E0126 09:25:25.042683 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:25:25 crc kubenswrapper[4806]: I0126 09:25:25.687260 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:25 crc kubenswrapper[4806]: I0126 09:25:25.714010 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h77dr" podStartSLOduration=14.250130864 podStartE2EDuration="20.713990801s" podCreationTimestamp="2026-01-26 09:25:05 +0000 UTC" firstStartedPulling="2026-01-26 09:25:07.447721748 +0000 UTC m=+5486.712129804" lastFinishedPulling="2026-01-26 09:25:13.911581685 +0000 UTC m=+5493.175989741" observedRunningTime="2026-01-26 09:25:14.533166023 +0000 UTC m=+5493.797574079" watchObservedRunningTime="2026-01-26 09:25:25.713990801 +0000 UTC m=+5504.978398857" Jan 26 09:25:25 crc kubenswrapper[4806]: I0126 09:25:25.739490 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:25 crc kubenswrapper[4806]: I0126 09:25:25.924480 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h77dr"] Jan 26 09:25:27 crc kubenswrapper[4806]: I0126 09:25:27.624909 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h77dr" podUID="315de5fa-5acc-47aa-a247-cac4c117affb" containerName="registry-server" containerID="cri-o://caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db" gracePeriod=2 Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.105105 4806 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.286106 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/315de5fa-5acc-47aa-a247-cac4c117affb-catalog-content\") pod \"315de5fa-5acc-47aa-a247-cac4c117affb\" (UID: \"315de5fa-5acc-47aa-a247-cac4c117affb\") " Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.286203 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f86c\" (UniqueName: \"kubernetes.io/projected/315de5fa-5acc-47aa-a247-cac4c117affb-kube-api-access-2f86c\") pod \"315de5fa-5acc-47aa-a247-cac4c117affb\" (UID: \"315de5fa-5acc-47aa-a247-cac4c117affb\") " Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.286249 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/315de5fa-5acc-47aa-a247-cac4c117affb-utilities\") pod \"315de5fa-5acc-47aa-a247-cac4c117affb\" (UID: \"315de5fa-5acc-47aa-a247-cac4c117affb\") " Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.287434 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/315de5fa-5acc-47aa-a247-cac4c117affb-utilities" (OuterVolumeSpecName: "utilities") pod "315de5fa-5acc-47aa-a247-cac4c117affb" (UID: "315de5fa-5acc-47aa-a247-cac4c117affb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.292828 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/315de5fa-5acc-47aa-a247-cac4c117affb-kube-api-access-2f86c" (OuterVolumeSpecName: "kube-api-access-2f86c") pod "315de5fa-5acc-47aa-a247-cac4c117affb" (UID: "315de5fa-5acc-47aa-a247-cac4c117affb"). InnerVolumeSpecName "kube-api-access-2f86c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.391259 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/315de5fa-5acc-47aa-a247-cac4c117affb-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.391489 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f86c\" (UniqueName: \"kubernetes.io/projected/315de5fa-5acc-47aa-a247-cac4c117affb-kube-api-access-2f86c\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.404888 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/315de5fa-5acc-47aa-a247-cac4c117affb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "315de5fa-5acc-47aa-a247-cac4c117affb" (UID: "315de5fa-5acc-47aa-a247-cac4c117affb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.493932 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/315de5fa-5acc-47aa-a247-cac4c117affb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.635108 4806 generic.go:334] "Generic (PLEG): container finished" podID="315de5fa-5acc-47aa-a247-cac4c117affb" containerID="caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db" exitCode=0 Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.635151 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h77dr" event={"ID":"315de5fa-5acc-47aa-a247-cac4c117affb","Type":"ContainerDied","Data":"caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db"} Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.635181 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h77dr" event={"ID":"315de5fa-5acc-47aa-a247-cac4c117affb","Type":"ContainerDied","Data":"5f0882d5406a7c503ab6b4e42548bbf8df0d9c893ba5b2288db1483c59a90aab"} Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.635199 4806 scope.go:117] "RemoveContainer" containerID="caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.635355 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h77dr" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.682715 4806 scope.go:117] "RemoveContainer" containerID="1cb9f2dcdda705c3e0d6ded12c3a8b43a9e063f3754290018ac43b5e23b16f28" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.689293 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h77dr"] Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.703365 4806 scope.go:117] "RemoveContainer" containerID="95ba4f965f880dd2f58c919adf75acec89c8751370dc463003f32ba973083b90" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.706438 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h77dr"] Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.760069 4806 scope.go:117] "RemoveContainer" containerID="caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db" Jan 26 09:25:28 crc kubenswrapper[4806]: E0126 09:25:28.763249 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db\": container with ID starting with caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db not found: ID does not exist" containerID="caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.763298 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db"} err="failed to get container status \"caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db\": rpc error: code = NotFound desc = could not find container \"caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db\": container with ID starting with caf8f9a0a8a55e307c7b6162a0889a04f5f9baa2bf48c77a29e2077d240452db not found: ID does not exist" Jan 26 09:25:28 crc 
kubenswrapper[4806]: I0126 09:25:28.763327 4806 scope.go:117] "RemoveContainer" containerID="1cb9f2dcdda705c3e0d6ded12c3a8b43a9e063f3754290018ac43b5e23b16f28" Jan 26 09:25:28 crc kubenswrapper[4806]: E0126 09:25:28.763815 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cb9f2dcdda705c3e0d6ded12c3a8b43a9e063f3754290018ac43b5e23b16f28\": container with ID starting with 1cb9f2dcdda705c3e0d6ded12c3a8b43a9e063f3754290018ac43b5e23b16f28 not found: ID does not exist" containerID="1cb9f2dcdda705c3e0d6ded12c3a8b43a9e063f3754290018ac43b5e23b16f28" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.763846 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cb9f2dcdda705c3e0d6ded12c3a8b43a9e063f3754290018ac43b5e23b16f28"} err="failed to get container status \"1cb9f2dcdda705c3e0d6ded12c3a8b43a9e063f3754290018ac43b5e23b16f28\": rpc error: code = NotFound desc = could not find container \"1cb9f2dcdda705c3e0d6ded12c3a8b43a9e063f3754290018ac43b5e23b16f28\": container with ID starting with 1cb9f2dcdda705c3e0d6ded12c3a8b43a9e063f3754290018ac43b5e23b16f28 not found: ID does not exist" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.763867 4806 scope.go:117] "RemoveContainer" containerID="95ba4f965f880dd2f58c919adf75acec89c8751370dc463003f32ba973083b90" Jan 26 09:25:28 crc kubenswrapper[4806]: E0126 09:25:28.764137 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95ba4f965f880dd2f58c919adf75acec89c8751370dc463003f32ba973083b90\": container with ID starting with 95ba4f965f880dd2f58c919adf75acec89c8751370dc463003f32ba973083b90 not found: ID does not exist" containerID="95ba4f965f880dd2f58c919adf75acec89c8751370dc463003f32ba973083b90" Jan 26 09:25:28 crc kubenswrapper[4806]: I0126 09:25:28.764167 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95ba4f965f880dd2f58c919adf75acec89c8751370dc463003f32ba973083b90"} err="failed to get container status \"95ba4f965f880dd2f58c919adf75acec89c8751370dc463003f32ba973083b90\": rpc error: code = NotFound desc = could not find container \"95ba4f965f880dd2f58c919adf75acec89c8751370dc463003f32ba973083b90\": container with ID starting with 95ba4f965f880dd2f58c919adf75acec89c8751370dc463003f32ba973083b90 not found: ID does not exist" Jan 26 09:25:29 crc kubenswrapper[4806]: I0126 09:25:29.053194 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="315de5fa-5acc-47aa-a247-cac4c117affb" path="/var/lib/kubelet/pods/315de5fa-5acc-47aa-a247-cac4c117affb/volumes" Jan 26 09:25:38 crc kubenswrapper[4806]: I0126 09:25:38.041642 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:25:38 crc kubenswrapper[4806]: E0126 09:25:38.042468 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:25:49 crc kubenswrapper[4806]: I0126 09:25:49.041912 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" 
Jan 26 09:25:49 crc kubenswrapper[4806]: E0126 09:25:49.042912 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:26:00 crc kubenswrapper[4806]: I0126 09:26:00.042606 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:26:00 crc kubenswrapper[4806]: E0126 09:26:00.043855 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:26:13 crc kubenswrapper[4806]: I0126 09:26:13.042566 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:26:13 crc kubenswrapper[4806]: E0126 09:26:13.043580 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:26:28 crc kubenswrapper[4806]: I0126 09:26:28.042554 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:26:28 crc kubenswrapper[4806]: E0126 09:26:28.043328 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:26:42 crc kubenswrapper[4806]: I0126 09:26:42.041697 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:26:42 crc kubenswrapper[4806]: E0126 09:26:42.042500 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:26:54 crc kubenswrapper[4806]: I0126 09:26:54.043566 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:26:54 crc kubenswrapper[4806]: I0126 09:26:54.328237 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" 
event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"2ef27a15112ff4ec36ba691c1982b94d997aaad09c20342558650b704969f6c7"} Jan 26 09:28:55 crc kubenswrapper[4806]: E0126 09:28:55.510737 4806 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.66:57246->38.102.83.66:34167: write tcp 38.102.83.66:57246->38.102.83.66:34167: write: broken pipe Jan 26 09:29:06 crc kubenswrapper[4806]: I0126 09:29:06.953194 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-78c9d88fc9-5rs9s" podUID="7c49d653-a114-4352-afd1-a2ca43c811f1" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 26 09:29:15 crc kubenswrapper[4806]: I0126 09:29:15.806263 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:29:15 crc kubenswrapper[4806]: I0126 09:29:15.807259 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:29:45 crc kubenswrapper[4806]: I0126 09:29:45.806712 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:29:45 crc kubenswrapper[4806]: I0126 09:29:45.807144 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.169739 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm"] Jan 26 09:30:00 crc kubenswrapper[4806]: E0126 09:30:00.170615 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315de5fa-5acc-47aa-a247-cac4c117affb" containerName="extract-content" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.170628 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="315de5fa-5acc-47aa-a247-cac4c117affb" containerName="extract-content" Jan 26 09:30:00 crc kubenswrapper[4806]: E0126 09:30:00.170638 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315de5fa-5acc-47aa-a247-cac4c117affb" containerName="extract-utilities" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.170644 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="315de5fa-5acc-47aa-a247-cac4c117affb" containerName="extract-utilities" Jan 26 09:30:00 crc kubenswrapper[4806]: E0126 09:30:00.170678 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="315de5fa-5acc-47aa-a247-cac4c117affb" containerName="registry-server" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.170685 4806 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="315de5fa-5acc-47aa-a247-cac4c117affb" containerName="registry-server" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.170856 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="315de5fa-5acc-47aa-a247-cac4c117affb" containerName="registry-server" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.171448 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.178784 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.179637 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.206781 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm"] Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.301174 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cae345c7-b5af-4b00-a276-943d47d07e1b-secret-volume\") pod \"collect-profiles-29490330-qfflm\" (UID: \"cae345c7-b5af-4b00-a276-943d47d07e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.301634 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cae345c7-b5af-4b00-a276-943d47d07e1b-config-volume\") pod \"collect-profiles-29490330-qfflm\" (UID: \"cae345c7-b5af-4b00-a276-943d47d07e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.301787 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh42l\" (UniqueName: \"kubernetes.io/projected/cae345c7-b5af-4b00-a276-943d47d07e1b-kube-api-access-vh42l\") pod \"collect-profiles-29490330-qfflm\" (UID: \"cae345c7-b5af-4b00-a276-943d47d07e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.403666 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cae345c7-b5af-4b00-a276-943d47d07e1b-config-volume\") pod \"collect-profiles-29490330-qfflm\" (UID: \"cae345c7-b5af-4b00-a276-943d47d07e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.404053 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh42l\" (UniqueName: \"kubernetes.io/projected/cae345c7-b5af-4b00-a276-943d47d07e1b-kube-api-access-vh42l\") pod \"collect-profiles-29490330-qfflm\" (UID: \"cae345c7-b5af-4b00-a276-943d47d07e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.404135 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cae345c7-b5af-4b00-a276-943d47d07e1b-secret-volume\") pod 
\"collect-profiles-29490330-qfflm\" (UID: \"cae345c7-b5af-4b00-a276-943d47d07e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.404657 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cae345c7-b5af-4b00-a276-943d47d07e1b-config-volume\") pod \"collect-profiles-29490330-qfflm\" (UID: \"cae345c7-b5af-4b00-a276-943d47d07e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.420416 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cae345c7-b5af-4b00-a276-943d47d07e1b-secret-volume\") pod \"collect-profiles-29490330-qfflm\" (UID: \"cae345c7-b5af-4b00-a276-943d47d07e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.423561 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh42l\" (UniqueName: \"kubernetes.io/projected/cae345c7-b5af-4b00-a276-943d47d07e1b-kube-api-access-vh42l\") pod \"collect-profiles-29490330-qfflm\" (UID: \"cae345c7-b5af-4b00-a276-943d47d07e1b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:00 crc kubenswrapper[4806]: I0126 09:30:00.493607 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:01 crc kubenswrapper[4806]: I0126 09:30:01.007229 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm"] Jan 26 09:30:01 crc kubenswrapper[4806]: I0126 09:30:01.848716 4806 generic.go:334] "Generic (PLEG): container finished" podID="cae345c7-b5af-4b00-a276-943d47d07e1b" containerID="bba8acb976810ebd253ba95d6f931c9321328e487a95207597e8a17fa9920398" exitCode=0 Jan 26 09:30:01 crc kubenswrapper[4806]: I0126 09:30:01.849046 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" event={"ID":"cae345c7-b5af-4b00-a276-943d47d07e1b","Type":"ContainerDied","Data":"bba8acb976810ebd253ba95d6f931c9321328e487a95207597e8a17fa9920398"} Jan 26 09:30:01 crc kubenswrapper[4806]: I0126 09:30:01.849075 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" event={"ID":"cae345c7-b5af-4b00-a276-943d47d07e1b","Type":"ContainerStarted","Data":"7a7c433a3e5e47d1dd056d5d1302674afd5f54248cab0f8831f7f4bd879bd87f"} Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.237961 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.362540 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cae345c7-b5af-4b00-a276-943d47d07e1b-secret-volume\") pod \"cae345c7-b5af-4b00-a276-943d47d07e1b\" (UID: \"cae345c7-b5af-4b00-a276-943d47d07e1b\") " Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.362701 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh42l\" (UniqueName: \"kubernetes.io/projected/cae345c7-b5af-4b00-a276-943d47d07e1b-kube-api-access-vh42l\") pod \"cae345c7-b5af-4b00-a276-943d47d07e1b\" (UID: \"cae345c7-b5af-4b00-a276-943d47d07e1b\") " Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.362736 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cae345c7-b5af-4b00-a276-943d47d07e1b-config-volume\") pod \"cae345c7-b5af-4b00-a276-943d47d07e1b\" (UID: \"cae345c7-b5af-4b00-a276-943d47d07e1b\") " Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.364021 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cae345c7-b5af-4b00-a276-943d47d07e1b-config-volume" (OuterVolumeSpecName: "config-volume") pod "cae345c7-b5af-4b00-a276-943d47d07e1b" (UID: "cae345c7-b5af-4b00-a276-943d47d07e1b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.370901 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cae345c7-b5af-4b00-a276-943d47d07e1b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cae345c7-b5af-4b00-a276-943d47d07e1b" (UID: "cae345c7-b5af-4b00-a276-943d47d07e1b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.376701 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae345c7-b5af-4b00-a276-943d47d07e1b-kube-api-access-vh42l" (OuterVolumeSpecName: "kube-api-access-vh42l") pod "cae345c7-b5af-4b00-a276-943d47d07e1b" (UID: "cae345c7-b5af-4b00-a276-943d47d07e1b"). InnerVolumeSpecName "kube-api-access-vh42l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.465011 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cae345c7-b5af-4b00-a276-943d47d07e1b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.465043 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh42l\" (UniqueName: \"kubernetes.io/projected/cae345c7-b5af-4b00-a276-943d47d07e1b-kube-api-access-vh42l\") on node \"crc\" DevicePath \"\"" Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.465055 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cae345c7-b5af-4b00-a276-943d47d07e1b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.869457 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" event={"ID":"cae345c7-b5af-4b00-a276-943d47d07e1b","Type":"ContainerDied","Data":"7a7c433a3e5e47d1dd056d5d1302674afd5f54248cab0f8831f7f4bd879bd87f"} Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.869813 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a7c433a3e5e47d1dd056d5d1302674afd5f54248cab0f8831f7f4bd879bd87f" Jan 26 09:30:03 crc kubenswrapper[4806]: I0126 09:30:03.869894 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490330-qfflm" Jan 26 09:30:04 crc kubenswrapper[4806]: I0126 09:30:04.332340 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp"] Jan 26 09:30:04 crc kubenswrapper[4806]: I0126 09:30:04.341764 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490285-4cwsp"] Jan 26 09:30:05 crc kubenswrapper[4806]: I0126 09:30:05.054569 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d931af36-7c58-4c04-b118-53cdbaafb655" path="/var/lib/kubelet/pods/d931af36-7c58-4c04-b118-53cdbaafb655/volumes" Jan 26 09:30:15 crc kubenswrapper[4806]: I0126 09:30:15.806075 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:30:15 crc kubenswrapper[4806]: I0126 09:30:15.808083 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:30:15 crc kubenswrapper[4806]: I0126 09:30:15.808289 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 09:30:15 crc kubenswrapper[4806]: I0126 09:30:15.809566 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ef27a15112ff4ec36ba691c1982b94d997aaad09c20342558650b704969f6c7"} 
pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:30:15 crc kubenswrapper[4806]: I0126 09:30:15.809846 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://2ef27a15112ff4ec36ba691c1982b94d997aaad09c20342558650b704969f6c7" gracePeriod=600 Jan 26 09:30:15 crc kubenswrapper[4806]: I0126 09:30:15.972605 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="2ef27a15112ff4ec36ba691c1982b94d997aaad09c20342558650b704969f6c7" exitCode=0 Jan 26 09:30:15 crc kubenswrapper[4806]: I0126 09:30:15.972677 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"2ef27a15112ff4ec36ba691c1982b94d997aaad09c20342558650b704969f6c7"} Jan 26 09:30:15 crc kubenswrapper[4806]: I0126 09:30:15.972964 4806 scope.go:117] "RemoveContainer" containerID="6fd55683b8f15334183fb0b95891d473becb88c0609832bd732db6a7f091647a" Jan 26 09:30:16 crc kubenswrapper[4806]: I0126 09:30:16.987285 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098"} Jan 26 09:30:39 crc kubenswrapper[4806]: I0126 09:30:39.060299 4806 scope.go:117] "RemoveContainer" containerID="7d9ca67e779af60e8f0b4b5bf373f0c45ed4484b58d7a75958970c336957a521" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.325055 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wsnpw"] Jan 26 09:32:13 crc kubenswrapper[4806]: E0126 09:32:13.325973 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae345c7-b5af-4b00-a276-943d47d07e1b" containerName="collect-profiles" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.325987 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae345c7-b5af-4b00-a276-943d47d07e1b" containerName="collect-profiles" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.326155 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae345c7-b5af-4b00-a276-943d47d07e1b" containerName="collect-profiles" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.327415 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.356164 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wsnpw"] Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.444105 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-catalog-content\") pod \"community-operators-wsnpw\" (UID: \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\") " pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.444170 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zggc\" (UniqueName: \"kubernetes.io/projected/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-kube-api-access-5zggc\") pod \"community-operators-wsnpw\" (UID: \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\") " pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.444266 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-utilities\") pod \"community-operators-wsnpw\" (UID: \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\") " pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.546392 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zggc\" (UniqueName: \"kubernetes.io/projected/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-kube-api-access-5zggc\") pod \"community-operators-wsnpw\" (UID: \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\") " pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.546494 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-utilities\") pod \"community-operators-wsnpw\" (UID: \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\") " pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.546582 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-catalog-content\") pod \"community-operators-wsnpw\" (UID: \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\") " pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.546976 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-catalog-content\") pod \"community-operators-wsnpw\" (UID: \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\") " pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.547218 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-utilities\") pod \"community-operators-wsnpw\" (UID: \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\") " pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.569539 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5zggc\" (UniqueName: \"kubernetes.io/projected/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-kube-api-access-5zggc\") pod \"community-operators-wsnpw\" (UID: \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\") " pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:13 crc kubenswrapper[4806]: I0126 09:32:13.645712 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:14 crc kubenswrapper[4806]: I0126 09:32:14.282793 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wsnpw"] Jan 26 09:32:15 crc kubenswrapper[4806]: I0126 09:32:15.042110 4806 generic.go:334] "Generic (PLEG): container finished" podID="d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" containerID="8c109ffe4eb3d69e27259747058e6ef237687b6727b147b6d7213e1ef882e04f" exitCode=0 Jan 26 09:32:15 crc kubenswrapper[4806]: I0126 09:32:15.048323 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 09:32:15 crc kubenswrapper[4806]: I0126 09:32:15.059286 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsnpw" event={"ID":"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f","Type":"ContainerDied","Data":"8c109ffe4eb3d69e27259747058e6ef237687b6727b147b6d7213e1ef882e04f"} Jan 26 09:32:15 crc kubenswrapper[4806]: I0126 09:32:15.059347 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsnpw" event={"ID":"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f","Type":"ContainerStarted","Data":"921a7951d7aacfc356e3863135e90edab716c9270ce39d2d6d75e4adc7befc27"} Jan 26 09:32:16 crc kubenswrapper[4806]: I0126 09:32:16.052047 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsnpw" event={"ID":"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f","Type":"ContainerStarted","Data":"41bfd1a7f2d0ae0e6f1da8b01c29a181aa0e38f13d87ed26ff23a35a314e1e51"} Jan 26 09:32:17 crc kubenswrapper[4806]: I0126 09:32:17.059565 4806 generic.go:334] "Generic (PLEG): container finished" podID="d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" containerID="41bfd1a7f2d0ae0e6f1da8b01c29a181aa0e38f13d87ed26ff23a35a314e1e51" exitCode=0 Jan 26 09:32:17 crc kubenswrapper[4806]: I0126 09:32:17.060510 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsnpw" event={"ID":"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f","Type":"ContainerDied","Data":"41bfd1a7f2d0ae0e6f1da8b01c29a181aa0e38f13d87ed26ff23a35a314e1e51"} Jan 26 09:32:18 crc kubenswrapper[4806]: I0126 09:32:18.070365 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsnpw" event={"ID":"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f","Type":"ContainerStarted","Data":"7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769"} Jan 26 09:32:18 crc kubenswrapper[4806]: I0126 09:32:18.090093 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wsnpw" podStartSLOduration=2.655054093 podStartE2EDuration="5.089701202s" podCreationTimestamp="2026-01-26 09:32:13 +0000 UTC" firstStartedPulling="2026-01-26 09:32:15.046165844 +0000 UTC m=+5914.310573930" lastFinishedPulling="2026-01-26 09:32:17.480812983 +0000 UTC m=+5916.745221039" observedRunningTime="2026-01-26 09:32:18.086164502 +0000 UTC m=+5917.350572558" watchObservedRunningTime="2026-01-26 
09:32:18.089701202 +0000 UTC m=+5917.354109258" Jan 26 09:32:23 crc kubenswrapper[4806]: I0126 09:32:23.647049 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:23 crc kubenswrapper[4806]: I0126 09:32:23.648168 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:23 crc kubenswrapper[4806]: I0126 09:32:23.710231 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:24 crc kubenswrapper[4806]: I0126 09:32:24.162842 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:24 crc kubenswrapper[4806]: I0126 09:32:24.212182 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wsnpw"] Jan 26 09:32:26 crc kubenswrapper[4806]: I0126 09:32:26.132504 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wsnpw" podUID="d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" containerName="registry-server" containerID="cri-o://7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769" gracePeriod=2 Jan 26 09:32:26 crc kubenswrapper[4806]: I0126 09:32:26.617582 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:26 crc kubenswrapper[4806]: I0126 09:32:26.701515 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zggc\" (UniqueName: \"kubernetes.io/projected/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-kube-api-access-5zggc\") pod \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\" (UID: \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\") " Jan 26 09:32:26 crc kubenswrapper[4806]: I0126 09:32:26.701578 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-catalog-content\") pod \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\" (UID: \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\") " Jan 26 09:32:26 crc kubenswrapper[4806]: I0126 09:32:26.701620 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-utilities\") pod \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\" (UID: \"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f\") " Jan 26 09:32:26 crc kubenswrapper[4806]: I0126 09:32:26.702614 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-utilities" (OuterVolumeSpecName: "utilities") pod "d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" (UID: "d904bc8a-ea03-41b3-b984-a0e21e6f8b0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:32:26 crc kubenswrapper[4806]: I0126 09:32:26.709539 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-kube-api-access-5zggc" (OuterVolumeSpecName: "kube-api-access-5zggc") pod "d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" (UID: "d904bc8a-ea03-41b3-b984-a0e21e6f8b0f"). InnerVolumeSpecName "kube-api-access-5zggc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:32:26 crc kubenswrapper[4806]: I0126 09:32:26.766634 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" (UID: "d904bc8a-ea03-41b3-b984-a0e21e6f8b0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:32:26 crc kubenswrapper[4806]: I0126 09:32:26.804659 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zggc\" (UniqueName: \"kubernetes.io/projected/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-kube-api-access-5zggc\") on node \"crc\" DevicePath \"\"" Jan 26 09:32:26 crc kubenswrapper[4806]: I0126 09:32:26.804688 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:32:26 crc kubenswrapper[4806]: I0126 09:32:26.804698 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.144286 4806 generic.go:334] "Generic (PLEG): container finished" podID="d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" containerID="7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769" exitCode=0 Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.144326 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsnpw" event={"ID":"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f","Type":"ContainerDied","Data":"7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769"} Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.144703 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wsnpw" event={"ID":"d904bc8a-ea03-41b3-b984-a0e21e6f8b0f","Type":"ContainerDied","Data":"921a7951d7aacfc356e3863135e90edab716c9270ce39d2d6d75e4adc7befc27"} Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.144732 4806 scope.go:117] "RemoveContainer" containerID="7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769" Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.144426 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wsnpw" Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.170705 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wsnpw"] Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.178058 4806 scope.go:117] "RemoveContainer" containerID="41bfd1a7f2d0ae0e6f1da8b01c29a181aa0e38f13d87ed26ff23a35a314e1e51" Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.178886 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wsnpw"] Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.199401 4806 scope.go:117] "RemoveContainer" containerID="8c109ffe4eb3d69e27259747058e6ef237687b6727b147b6d7213e1ef882e04f" Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.236710 4806 scope.go:117] "RemoveContainer" containerID="7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769" Jan 26 09:32:27 crc kubenswrapper[4806]: E0126 09:32:27.237096 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769\": container with ID starting with 7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769 not found: ID does not exist" containerID="7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769" Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.237141 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769"} err="failed to get container status \"7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769\": rpc error: code = NotFound desc = could not find container \"7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769\": container with ID starting with 7fb5c6f3350b7e0ba7e87a6696949bcdbf20483a4271796f82cde5d250742769 not found: ID does not exist" Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.237170 4806 scope.go:117] "RemoveContainer" containerID="41bfd1a7f2d0ae0e6f1da8b01c29a181aa0e38f13d87ed26ff23a35a314e1e51" Jan 26 09:32:27 crc kubenswrapper[4806]: E0126 09:32:27.237568 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41bfd1a7f2d0ae0e6f1da8b01c29a181aa0e38f13d87ed26ff23a35a314e1e51\": container with ID starting with 41bfd1a7f2d0ae0e6f1da8b01c29a181aa0e38f13d87ed26ff23a35a314e1e51 not found: ID does not exist" containerID="41bfd1a7f2d0ae0e6f1da8b01c29a181aa0e38f13d87ed26ff23a35a314e1e51" Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.237589 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41bfd1a7f2d0ae0e6f1da8b01c29a181aa0e38f13d87ed26ff23a35a314e1e51"} err="failed to get container status \"41bfd1a7f2d0ae0e6f1da8b01c29a181aa0e38f13d87ed26ff23a35a314e1e51\": rpc error: code = NotFound desc = could not find container \"41bfd1a7f2d0ae0e6f1da8b01c29a181aa0e38f13d87ed26ff23a35a314e1e51\": container with ID starting with 41bfd1a7f2d0ae0e6f1da8b01c29a181aa0e38f13d87ed26ff23a35a314e1e51 not found: ID does not exist" Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.237604 4806 scope.go:117] "RemoveContainer" containerID="8c109ffe4eb3d69e27259747058e6ef237687b6727b147b6d7213e1ef882e04f" Jan 26 09:32:27 crc kubenswrapper[4806]: E0126 09:32:27.238100 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8c109ffe4eb3d69e27259747058e6ef237687b6727b147b6d7213e1ef882e04f\": container with ID starting with 8c109ffe4eb3d69e27259747058e6ef237687b6727b147b6d7213e1ef882e04f not found: ID does not exist" containerID="8c109ffe4eb3d69e27259747058e6ef237687b6727b147b6d7213e1ef882e04f" Jan 26 09:32:27 crc kubenswrapper[4806]: I0126 09:32:27.238118 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c109ffe4eb3d69e27259747058e6ef237687b6727b147b6d7213e1ef882e04f"} err="failed to get container status \"8c109ffe4eb3d69e27259747058e6ef237687b6727b147b6d7213e1ef882e04f\": rpc error: code = NotFound desc = could not find container \"8c109ffe4eb3d69e27259747058e6ef237687b6727b147b6d7213e1ef882e04f\": container with ID starting with 8c109ffe4eb3d69e27259747058e6ef237687b6727b147b6d7213e1ef882e04f not found: ID does not exist" Jan 26 09:32:29 crc kubenswrapper[4806]: I0126 09:32:29.051861 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" path="/var/lib/kubelet/pods/d904bc8a-ea03-41b3-b984-a0e21e6f8b0f/volumes" Jan 26 09:32:45 crc kubenswrapper[4806]: I0126 09:32:45.806952 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:32:45 crc kubenswrapper[4806]: I0126 09:32:45.807693 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:33:15 crc kubenswrapper[4806]: I0126 09:33:15.807019 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:33:15 crc kubenswrapper[4806]: I0126 09:33:15.807788 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.352584 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-drlh5"] Jan 26 09:33:31 crc kubenswrapper[4806]: E0126 09:33:31.353470 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" containerName="extract-utilities" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.353486 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" containerName="extract-utilities" Jan 26 09:33:31 crc kubenswrapper[4806]: E0126 09:33:31.353508 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" containerName="registry-server" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.353532 4806 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" containerName="registry-server" Jan 26 09:33:31 crc kubenswrapper[4806]: E0126 09:33:31.353560 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" containerName="extract-content" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.353568 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" containerName="extract-content" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.353779 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d904bc8a-ea03-41b3-b984-a0e21e6f8b0f" containerName="registry-server" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.358001 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.371781 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drlh5"] Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.476008 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32225309-25f3-4b57-a766-e8ff1c436f73-catalog-content\") pod \"certified-operators-drlh5\" (UID: \"32225309-25f3-4b57-a766-e8ff1c436f73\") " pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.476059 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjw7x\" (UniqueName: \"kubernetes.io/projected/32225309-25f3-4b57-a766-e8ff1c436f73-kube-api-access-bjw7x\") pod \"certified-operators-drlh5\" (UID: \"32225309-25f3-4b57-a766-e8ff1c436f73\") " pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.476179 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32225309-25f3-4b57-a766-e8ff1c436f73-utilities\") pod \"certified-operators-drlh5\" (UID: \"32225309-25f3-4b57-a766-e8ff1c436f73\") " pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.578481 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32225309-25f3-4b57-a766-e8ff1c436f73-utilities\") pod \"certified-operators-drlh5\" (UID: \"32225309-25f3-4b57-a766-e8ff1c436f73\") " pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.578864 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32225309-25f3-4b57-a766-e8ff1c436f73-catalog-content\") pod \"certified-operators-drlh5\" (UID: \"32225309-25f3-4b57-a766-e8ff1c436f73\") " pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.578988 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjw7x\" (UniqueName: \"kubernetes.io/projected/32225309-25f3-4b57-a766-e8ff1c436f73-kube-api-access-bjw7x\") pod \"certified-operators-drlh5\" (UID: \"32225309-25f3-4b57-a766-e8ff1c436f73\") " pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:31 crc kubenswrapper[4806]: 
I0126 09:33:31.579133 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32225309-25f3-4b57-a766-e8ff1c436f73-catalog-content\") pod \"certified-operators-drlh5\" (UID: \"32225309-25f3-4b57-a766-e8ff1c436f73\") " pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.578935 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32225309-25f3-4b57-a766-e8ff1c436f73-utilities\") pod \"certified-operators-drlh5\" (UID: \"32225309-25f3-4b57-a766-e8ff1c436f73\") " pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.620397 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjw7x\" (UniqueName: \"kubernetes.io/projected/32225309-25f3-4b57-a766-e8ff1c436f73-kube-api-access-bjw7x\") pod \"certified-operators-drlh5\" (UID: \"32225309-25f3-4b57-a766-e8ff1c436f73\") " pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:31 crc kubenswrapper[4806]: I0126 09:33:31.678907 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:32 crc kubenswrapper[4806]: I0126 09:33:32.179828 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-drlh5"] Jan 26 09:33:32 crc kubenswrapper[4806]: I0126 09:33:32.814466 4806 generic.go:334] "Generic (PLEG): container finished" podID="32225309-25f3-4b57-a766-e8ff1c436f73" containerID="49886546ac4da3215cd428665e81e84b5417d2e7ba8a1c42543e82ff2adbad0a" exitCode=0 Jan 26 09:33:32 crc kubenswrapper[4806]: I0126 09:33:32.814579 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drlh5" event={"ID":"32225309-25f3-4b57-a766-e8ff1c436f73","Type":"ContainerDied","Data":"49886546ac4da3215cd428665e81e84b5417d2e7ba8a1c42543e82ff2adbad0a"} Jan 26 09:33:32 crc kubenswrapper[4806]: I0126 09:33:32.817135 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drlh5" event={"ID":"32225309-25f3-4b57-a766-e8ff1c436f73","Type":"ContainerStarted","Data":"e6ad6cdc071b5d06a935984f093bc5ecc8a6118e73f5e6f159893273310f142a"} Jan 26 09:33:33 crc kubenswrapper[4806]: I0126 09:33:33.825421 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drlh5" event={"ID":"32225309-25f3-4b57-a766-e8ff1c436f73","Type":"ContainerStarted","Data":"962f0c3e40fb857bf599960aa38f4fd87e71993e66027a943a424d2f64d36cb4"} Jan 26 09:33:34 crc kubenswrapper[4806]: I0126 09:33:34.835508 4806 generic.go:334] "Generic (PLEG): container finished" podID="32225309-25f3-4b57-a766-e8ff1c436f73" containerID="962f0c3e40fb857bf599960aa38f4fd87e71993e66027a943a424d2f64d36cb4" exitCode=0 Jan 26 09:33:34 crc kubenswrapper[4806]: I0126 09:33:34.835566 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drlh5" event={"ID":"32225309-25f3-4b57-a766-e8ff1c436f73","Type":"ContainerDied","Data":"962f0c3e40fb857bf599960aa38f4fd87e71993e66027a943a424d2f64d36cb4"} Jan 26 09:33:35 crc kubenswrapper[4806]: I0126 09:33:35.846977 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drlh5" 
event={"ID":"32225309-25f3-4b57-a766-e8ff1c436f73","Type":"ContainerStarted","Data":"81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94"} Jan 26 09:33:35 crc kubenswrapper[4806]: I0126 09:33:35.867867 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-drlh5" podStartSLOduration=2.183684541 podStartE2EDuration="4.867849102s" podCreationTimestamp="2026-01-26 09:33:31 +0000 UTC" firstStartedPulling="2026-01-26 09:33:32.81602801 +0000 UTC m=+5992.080436076" lastFinishedPulling="2026-01-26 09:33:35.500192581 +0000 UTC m=+5994.764600637" observedRunningTime="2026-01-26 09:33:35.8656635 +0000 UTC m=+5995.130071566" watchObservedRunningTime="2026-01-26 09:33:35.867849102 +0000 UTC m=+5995.132257158" Jan 26 09:33:41 crc kubenswrapper[4806]: I0126 09:33:41.679922 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:41 crc kubenswrapper[4806]: I0126 09:33:41.680865 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:41 crc kubenswrapper[4806]: I0126 09:33:41.725896 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:41 crc kubenswrapper[4806]: I0126 09:33:41.985285 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:42 crc kubenswrapper[4806]: I0126 09:33:42.067210 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drlh5"] Jan 26 09:33:43 crc kubenswrapper[4806]: I0126 09:33:43.926319 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-drlh5" podUID="32225309-25f3-4b57-a766-e8ff1c436f73" containerName="registry-server" containerID="cri-o://81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94" gracePeriod=2 Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.479264 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.503850 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32225309-25f3-4b57-a766-e8ff1c436f73-utilities\") pod \"32225309-25f3-4b57-a766-e8ff1c436f73\" (UID: \"32225309-25f3-4b57-a766-e8ff1c436f73\") " Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.503971 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32225309-25f3-4b57-a766-e8ff1c436f73-catalog-content\") pod \"32225309-25f3-4b57-a766-e8ff1c436f73\" (UID: \"32225309-25f3-4b57-a766-e8ff1c436f73\") " Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.504040 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjw7x\" (UniqueName: \"kubernetes.io/projected/32225309-25f3-4b57-a766-e8ff1c436f73-kube-api-access-bjw7x\") pod \"32225309-25f3-4b57-a766-e8ff1c436f73\" (UID: \"32225309-25f3-4b57-a766-e8ff1c436f73\") " Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.504666 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32225309-25f3-4b57-a766-e8ff1c436f73-utilities" (OuterVolumeSpecName: "utilities") pod "32225309-25f3-4b57-a766-e8ff1c436f73" (UID: "32225309-25f3-4b57-a766-e8ff1c436f73"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.513109 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32225309-25f3-4b57-a766-e8ff1c436f73-kube-api-access-bjw7x" (OuterVolumeSpecName: "kube-api-access-bjw7x") pod "32225309-25f3-4b57-a766-e8ff1c436f73" (UID: "32225309-25f3-4b57-a766-e8ff1c436f73"). InnerVolumeSpecName "kube-api-access-bjw7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.574467 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32225309-25f3-4b57-a766-e8ff1c436f73-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32225309-25f3-4b57-a766-e8ff1c436f73" (UID: "32225309-25f3-4b57-a766-e8ff1c436f73"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.606102 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32225309-25f3-4b57-a766-e8ff1c436f73-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.606139 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32225309-25f3-4b57-a766-e8ff1c436f73-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.606156 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjw7x\" (UniqueName: \"kubernetes.io/projected/32225309-25f3-4b57-a766-e8ff1c436f73-kube-api-access-bjw7x\") on node \"crc\" DevicePath \"\"" Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.939006 4806 generic.go:334] "Generic (PLEG): container finished" podID="32225309-25f3-4b57-a766-e8ff1c436f73" containerID="81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94" exitCode=0 Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.939111 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drlh5" event={"ID":"32225309-25f3-4b57-a766-e8ff1c436f73","Type":"ContainerDied","Data":"81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94"} Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.939296 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-drlh5" event={"ID":"32225309-25f3-4b57-a766-e8ff1c436f73","Type":"ContainerDied","Data":"e6ad6cdc071b5d06a935984f093bc5ecc8a6118e73f5e6f159893273310f142a"} Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.939317 4806 scope.go:117] "RemoveContainer" containerID="81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94" Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.939178 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-drlh5" Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.984261 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-drlh5"] Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.989891 4806 scope.go:117] "RemoveContainer" containerID="962f0c3e40fb857bf599960aa38f4fd87e71993e66027a943a424d2f64d36cb4" Jan 26 09:33:44 crc kubenswrapper[4806]: I0126 09:33:44.991904 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-drlh5"] Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.012863 4806 scope.go:117] "RemoveContainer" containerID="49886546ac4da3215cd428665e81e84b5417d2e7ba8a1c42543e82ff2adbad0a" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.056901 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32225309-25f3-4b57-a766-e8ff1c436f73" path="/var/lib/kubelet/pods/32225309-25f3-4b57-a766-e8ff1c436f73/volumes" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.067918 4806 scope.go:117] "RemoveContainer" containerID="81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94" Jan 26 09:33:45 crc kubenswrapper[4806]: E0126 09:33:45.068840 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94\": container with ID starting with 81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94 not found: ID does not exist" containerID="81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.068883 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94"} err="failed to get container status \"81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94\": rpc error: code = NotFound desc = could not find container \"81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94\": container with ID starting with 81b2c3a342fe61b20f12e6a49a25ec6e8c6615e2ea6d21aded3af8e03373ed94 not found: ID does not exist" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.068916 4806 scope.go:117] "RemoveContainer" containerID="962f0c3e40fb857bf599960aa38f4fd87e71993e66027a943a424d2f64d36cb4" Jan 26 09:33:45 crc kubenswrapper[4806]: E0126 09:33:45.069260 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"962f0c3e40fb857bf599960aa38f4fd87e71993e66027a943a424d2f64d36cb4\": container with ID starting with 962f0c3e40fb857bf599960aa38f4fd87e71993e66027a943a424d2f64d36cb4 not found: ID does not exist" containerID="962f0c3e40fb857bf599960aa38f4fd87e71993e66027a943a424d2f64d36cb4" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.069285 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"962f0c3e40fb857bf599960aa38f4fd87e71993e66027a943a424d2f64d36cb4"} err="failed to get container status \"962f0c3e40fb857bf599960aa38f4fd87e71993e66027a943a424d2f64d36cb4\": rpc error: code = NotFound desc = could not find container \"962f0c3e40fb857bf599960aa38f4fd87e71993e66027a943a424d2f64d36cb4\": container with ID starting with 962f0c3e40fb857bf599960aa38f4fd87e71993e66027a943a424d2f64d36cb4 not found: ID does not exist" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 
09:33:45.069305 4806 scope.go:117] "RemoveContainer" containerID="49886546ac4da3215cd428665e81e84b5417d2e7ba8a1c42543e82ff2adbad0a" Jan 26 09:33:45 crc kubenswrapper[4806]: E0126 09:33:45.069641 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49886546ac4da3215cd428665e81e84b5417d2e7ba8a1c42543e82ff2adbad0a\": container with ID starting with 49886546ac4da3215cd428665e81e84b5417d2e7ba8a1c42543e82ff2adbad0a not found: ID does not exist" containerID="49886546ac4da3215cd428665e81e84b5417d2e7ba8a1c42543e82ff2adbad0a" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.069668 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49886546ac4da3215cd428665e81e84b5417d2e7ba8a1c42543e82ff2adbad0a"} err="failed to get container status \"49886546ac4da3215cd428665e81e84b5417d2e7ba8a1c42543e82ff2adbad0a\": rpc error: code = NotFound desc = could not find container \"49886546ac4da3215cd428665e81e84b5417d2e7ba8a1c42543e82ff2adbad0a\": container with ID starting with 49886546ac4da3215cd428665e81e84b5417d2e7ba8a1c42543e82ff2adbad0a not found: ID does not exist" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.810074 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.810410 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.810605 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.812207 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.812664 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" gracePeriod=600 Jan 26 09:33:45 crc kubenswrapper[4806]: E0126 09:33:45.941333 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.948289 4806 generic.go:334] 
"Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" exitCode=0 Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.948369 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098"} Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.948420 4806 scope.go:117] "RemoveContainer" containerID="2ef27a15112ff4ec36ba691c1982b94d997aaad09c20342558650b704969f6c7" Jan 26 09:33:45 crc kubenswrapper[4806]: I0126 09:33:45.950115 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:33:45 crc kubenswrapper[4806]: E0126 09:33:45.950413 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:34:00 crc kubenswrapper[4806]: I0126 09:34:00.042642 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:34:00 crc kubenswrapper[4806]: E0126 09:34:00.043704 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:34:14 crc kubenswrapper[4806]: I0126 09:34:14.042149 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:34:14 crc kubenswrapper[4806]: E0126 09:34:14.042840 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:34:29 crc kubenswrapper[4806]: I0126 09:34:29.042403 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:34:29 crc kubenswrapper[4806]: E0126 09:34:29.043334 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:34:43 crc kubenswrapper[4806]: I0126 09:34:43.042245 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" 
Jan 26 09:34:43 crc kubenswrapper[4806]: E0126 09:34:43.043055 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:34:54 crc kubenswrapper[4806]: I0126 09:34:54.042355 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:34:54 crc kubenswrapper[4806]: E0126 09:34:54.043183 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:35:06 crc kubenswrapper[4806]: I0126 09:35:06.043122 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:35:06 crc kubenswrapper[4806]: E0126 09:35:06.044208 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:35:17 crc kubenswrapper[4806]: I0126 09:35:17.042256 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:35:17 crc kubenswrapper[4806]: E0126 09:35:17.043173 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:35:26 crc kubenswrapper[4806]: I0126 09:35:26.784965 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2bms4"] Jan 26 09:35:26 crc kubenswrapper[4806]: E0126 09:35:26.787531 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32225309-25f3-4b57-a766-e8ff1c436f73" containerName="registry-server" Jan 26 09:35:26 crc kubenswrapper[4806]: I0126 09:35:26.787643 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="32225309-25f3-4b57-a766-e8ff1c436f73" containerName="registry-server" Jan 26 09:35:26 crc kubenswrapper[4806]: E0126 09:35:26.787741 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32225309-25f3-4b57-a766-e8ff1c436f73" containerName="extract-content" Jan 26 09:35:26 crc kubenswrapper[4806]: I0126 09:35:26.787813 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="32225309-25f3-4b57-a766-e8ff1c436f73" containerName="extract-content" Jan 26 09:35:26 crc kubenswrapper[4806]: E0126 09:35:26.787903 4806 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32225309-25f3-4b57-a766-e8ff1c436f73" containerName="extract-utilities" Jan 26 09:35:26 crc kubenswrapper[4806]: I0126 09:35:26.788028 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="32225309-25f3-4b57-a766-e8ff1c436f73" containerName="extract-utilities" Jan 26 09:35:26 crc kubenswrapper[4806]: I0126 09:35:26.788411 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="32225309-25f3-4b57-a766-e8ff1c436f73" containerName="registry-server" Jan 26 09:35:26 crc kubenswrapper[4806]: I0126 09:35:26.790113 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:26 crc kubenswrapper[4806]: I0126 09:35:26.816171 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2bms4"] Jan 26 09:35:26 crc kubenswrapper[4806]: I0126 09:35:26.941286 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a618e04-42c1-42f9-b260-8a10ce456955-utilities\") pod \"redhat-operators-2bms4\" (UID: \"6a618e04-42c1-42f9-b260-8a10ce456955\") " pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:26 crc kubenswrapper[4806]: I0126 09:35:26.941344 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a618e04-42c1-42f9-b260-8a10ce456955-catalog-content\") pod \"redhat-operators-2bms4\" (UID: \"6a618e04-42c1-42f9-b260-8a10ce456955\") " pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:26 crc kubenswrapper[4806]: I0126 09:35:26.941447 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgb4t\" (UniqueName: \"kubernetes.io/projected/6a618e04-42c1-42f9-b260-8a10ce456955-kube-api-access-vgb4t\") pod \"redhat-operators-2bms4\" (UID: \"6a618e04-42c1-42f9-b260-8a10ce456955\") " pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:27 crc kubenswrapper[4806]: I0126 09:35:27.043843 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a618e04-42c1-42f9-b260-8a10ce456955-utilities\") pod \"redhat-operators-2bms4\" (UID: \"6a618e04-42c1-42f9-b260-8a10ce456955\") " pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:27 crc kubenswrapper[4806]: I0126 09:35:27.043921 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a618e04-42c1-42f9-b260-8a10ce456955-catalog-content\") pod \"redhat-operators-2bms4\" (UID: \"6a618e04-42c1-42f9-b260-8a10ce456955\") " pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:27 crc kubenswrapper[4806]: I0126 09:35:27.044030 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgb4t\" (UniqueName: \"kubernetes.io/projected/6a618e04-42c1-42f9-b260-8a10ce456955-kube-api-access-vgb4t\") pod \"redhat-operators-2bms4\" (UID: \"6a618e04-42c1-42f9-b260-8a10ce456955\") " pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:27 crc kubenswrapper[4806]: I0126 09:35:27.044572 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a618e04-42c1-42f9-b260-8a10ce456955-utilities\") pod 
\"redhat-operators-2bms4\" (UID: \"6a618e04-42c1-42f9-b260-8a10ce456955\") " pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:27 crc kubenswrapper[4806]: I0126 09:35:27.044893 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a618e04-42c1-42f9-b260-8a10ce456955-catalog-content\") pod \"redhat-operators-2bms4\" (UID: \"6a618e04-42c1-42f9-b260-8a10ce456955\") " pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:27 crc kubenswrapper[4806]: I0126 09:35:27.066833 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgb4t\" (UniqueName: \"kubernetes.io/projected/6a618e04-42c1-42f9-b260-8a10ce456955-kube-api-access-vgb4t\") pod \"redhat-operators-2bms4\" (UID: \"6a618e04-42c1-42f9-b260-8a10ce456955\") " pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:27 crc kubenswrapper[4806]: I0126 09:35:27.124917 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:27 crc kubenswrapper[4806]: I0126 09:35:27.603730 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2bms4"] Jan 26 09:35:28 crc kubenswrapper[4806]: I0126 09:35:28.014014 4806 generic.go:334] "Generic (PLEG): container finished" podID="6a618e04-42c1-42f9-b260-8a10ce456955" containerID="dad010d23b5bb87fc90b83a0cc063cbbfbc2263affed2ecc1759d8bf9cd2716f" exitCode=0 Jan 26 09:35:28 crc kubenswrapper[4806]: I0126 09:35:28.014056 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bms4" event={"ID":"6a618e04-42c1-42f9-b260-8a10ce456955","Type":"ContainerDied","Data":"dad010d23b5bb87fc90b83a0cc063cbbfbc2263affed2ecc1759d8bf9cd2716f"} Jan 26 09:35:28 crc kubenswrapper[4806]: I0126 09:35:28.014082 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bms4" event={"ID":"6a618e04-42c1-42f9-b260-8a10ce456955","Type":"ContainerStarted","Data":"23861f0fed59ec588fcea4802e7074ca020c38c9ed77e59f3c641e2f0ee88f25"} Jan 26 09:35:29 crc kubenswrapper[4806]: I0126 09:35:29.023229 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bms4" event={"ID":"6a618e04-42c1-42f9-b260-8a10ce456955","Type":"ContainerStarted","Data":"3a9130cdd2e6175d332bb3be0ab1ac36e12bc89e7bc50d72cd157965df661050"} Jan 26 09:35:32 crc kubenswrapper[4806]: I0126 09:35:32.042232 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:35:32 crc kubenswrapper[4806]: E0126 09:35:32.043342 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:35:32 crc kubenswrapper[4806]: I0126 09:35:32.052258 4806 generic.go:334] "Generic (PLEG): container finished" podID="6a618e04-42c1-42f9-b260-8a10ce456955" containerID="3a9130cdd2e6175d332bb3be0ab1ac36e12bc89e7bc50d72cd157965df661050" exitCode=0 Jan 26 09:35:32 crc kubenswrapper[4806]: I0126 09:35:32.052310 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-2bms4" event={"ID":"6a618e04-42c1-42f9-b260-8a10ce456955","Type":"ContainerDied","Data":"3a9130cdd2e6175d332bb3be0ab1ac36e12bc89e7bc50d72cd157965df661050"} Jan 26 09:35:33 crc kubenswrapper[4806]: I0126 09:35:33.070705 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bms4" event={"ID":"6a618e04-42c1-42f9-b260-8a10ce456955","Type":"ContainerStarted","Data":"16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26"} Jan 26 09:35:33 crc kubenswrapper[4806]: I0126 09:35:33.134229 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2bms4" podStartSLOduration=2.705498082 podStartE2EDuration="7.134204357s" podCreationTimestamp="2026-01-26 09:35:26 +0000 UTC" firstStartedPulling="2026-01-26 09:35:28.015866446 +0000 UTC m=+6107.280274512" lastFinishedPulling="2026-01-26 09:35:32.444572721 +0000 UTC m=+6111.708980787" observedRunningTime="2026-01-26 09:35:33.114590472 +0000 UTC m=+6112.378998528" watchObservedRunningTime="2026-01-26 09:35:33.134204357 +0000 UTC m=+6112.398612413" Jan 26 09:35:37 crc kubenswrapper[4806]: I0126 09:35:37.125791 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:37 crc kubenswrapper[4806]: I0126 09:35:37.126294 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:38 crc kubenswrapper[4806]: I0126 09:35:38.181158 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2bms4" podUID="6a618e04-42c1-42f9-b260-8a10ce456955" containerName="registry-server" probeResult="failure" output=< Jan 26 09:35:38 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 09:35:38 crc kubenswrapper[4806]: > Jan 26 09:35:45 crc kubenswrapper[4806]: I0126 09:35:45.043005 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:35:45 crc kubenswrapper[4806]: E0126 09:35:45.043671 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.176072 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2bms4" podUID="6a618e04-42c1-42f9-b260-8a10ce456955" containerName="registry-server" probeResult="failure" output=< Jan 26 09:35:48 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 09:35:48 crc kubenswrapper[4806]: > Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.225367 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9cjs7"] Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.228427 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.243811 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cjs7"] Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.260139 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-catalog-content\") pod \"redhat-marketplace-9cjs7\" (UID: \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\") " pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.260208 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndsw4\" (UniqueName: \"kubernetes.io/projected/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-kube-api-access-ndsw4\") pod \"redhat-marketplace-9cjs7\" (UID: \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\") " pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.260640 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-utilities\") pod \"redhat-marketplace-9cjs7\" (UID: \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\") " pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.363252 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-utilities\") pod \"redhat-marketplace-9cjs7\" (UID: \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\") " pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.363802 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-catalog-content\") pod \"redhat-marketplace-9cjs7\" (UID: \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\") " pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.363870 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndsw4\" (UniqueName: \"kubernetes.io/projected/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-kube-api-access-ndsw4\") pod \"redhat-marketplace-9cjs7\" (UID: \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\") " pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.364197 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-utilities\") pod \"redhat-marketplace-9cjs7\" (UID: \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\") " pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.364750 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-catalog-content\") pod \"redhat-marketplace-9cjs7\" (UID: \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\") " pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.404195 4806 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ndsw4\" (UniqueName: \"kubernetes.io/projected/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-kube-api-access-ndsw4\") pod \"redhat-marketplace-9cjs7\" (UID: \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\") " pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:48 crc kubenswrapper[4806]: I0126 09:35:48.548290 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:49 crc kubenswrapper[4806]: I0126 09:35:49.226610 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cjs7"] Jan 26 09:35:49 crc kubenswrapper[4806]: W0126 09:35:49.237477 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67c29e43_63e2_46e4_b9c8_18e5321ac0f3.slice/crio-9c478d2782823d5babfa1a50f93f9bb985bbcf84d78f52447d7d15fbea85a52f WatchSource:0}: Error finding container 9c478d2782823d5babfa1a50f93f9bb985bbcf84d78f52447d7d15fbea85a52f: Status 404 returned error can't find the container with id 9c478d2782823d5babfa1a50f93f9bb985bbcf84d78f52447d7d15fbea85a52f Jan 26 09:35:50 crc kubenswrapper[4806]: I0126 09:35:50.242208 4806 generic.go:334] "Generic (PLEG): container finished" podID="67c29e43-63e2-46e4-b9c8-18e5321ac0f3" containerID="f1349196a0f5063fc78e7cca2176481a45cd28ff76978fd78be9e269ce40bb13" exitCode=0 Jan 26 09:35:50 crc kubenswrapper[4806]: I0126 09:35:50.242461 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cjs7" event={"ID":"67c29e43-63e2-46e4-b9c8-18e5321ac0f3","Type":"ContainerDied","Data":"f1349196a0f5063fc78e7cca2176481a45cd28ff76978fd78be9e269ce40bb13"} Jan 26 09:35:50 crc kubenswrapper[4806]: I0126 09:35:50.242508 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cjs7" event={"ID":"67c29e43-63e2-46e4-b9c8-18e5321ac0f3","Type":"ContainerStarted","Data":"9c478d2782823d5babfa1a50f93f9bb985bbcf84d78f52447d7d15fbea85a52f"} Jan 26 09:35:51 crc kubenswrapper[4806]: I0126 09:35:51.252591 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cjs7" event={"ID":"67c29e43-63e2-46e4-b9c8-18e5321ac0f3","Type":"ContainerStarted","Data":"cf394fadf06fd072db0fdc931a6ce899ce0f88ce1738c28bbf537cf4c687e831"} Jan 26 09:35:52 crc kubenswrapper[4806]: I0126 09:35:52.275156 4806 generic.go:334] "Generic (PLEG): container finished" podID="67c29e43-63e2-46e4-b9c8-18e5321ac0f3" containerID="cf394fadf06fd072db0fdc931a6ce899ce0f88ce1738c28bbf537cf4c687e831" exitCode=0 Jan 26 09:35:52 crc kubenswrapper[4806]: I0126 09:35:52.275235 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cjs7" event={"ID":"67c29e43-63e2-46e4-b9c8-18e5321ac0f3","Type":"ContainerDied","Data":"cf394fadf06fd072db0fdc931a6ce899ce0f88ce1738c28bbf537cf4c687e831"} Jan 26 09:35:53 crc kubenswrapper[4806]: I0126 09:35:53.286774 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cjs7" event={"ID":"67c29e43-63e2-46e4-b9c8-18e5321ac0f3","Type":"ContainerStarted","Data":"6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca"} Jan 26 09:35:53 crc kubenswrapper[4806]: I0126 09:35:53.316368 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9cjs7" podStartSLOduration=2.7810240090000002 
podStartE2EDuration="5.315880659s" podCreationTimestamp="2026-01-26 09:35:48 +0000 UTC" firstStartedPulling="2026-01-26 09:35:50.245601746 +0000 UTC m=+6129.510009812" lastFinishedPulling="2026-01-26 09:35:52.780458416 +0000 UTC m=+6132.044866462" observedRunningTime="2026-01-26 09:35:53.309163686 +0000 UTC m=+6132.573571752" watchObservedRunningTime="2026-01-26 09:35:53.315880659 +0000 UTC m=+6132.580288735" Jan 26 09:35:56 crc kubenswrapper[4806]: I0126 09:35:56.042066 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:35:56 crc kubenswrapper[4806]: E0126 09:35:56.042698 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:35:57 crc kubenswrapper[4806]: I0126 09:35:57.193732 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:57 crc kubenswrapper[4806]: I0126 09:35:57.257803 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:58 crc kubenswrapper[4806]: I0126 09:35:58.003870 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2bms4"] Jan 26 09:35:58 crc kubenswrapper[4806]: I0126 09:35:58.385614 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2bms4" podUID="6a618e04-42c1-42f9-b260-8a10ce456955" containerName="registry-server" containerID="cri-o://16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26" gracePeriod=2 Jan 26 09:35:58 crc kubenswrapper[4806]: I0126 09:35:58.549867 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:58 crc kubenswrapper[4806]: I0126 09:35:58.550303 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:58 crc kubenswrapper[4806]: I0126 09:35:58.597241 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:58 crc kubenswrapper[4806]: I0126 09:35:58.857073 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:58 crc kubenswrapper[4806]: I0126 09:35:58.923177 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a618e04-42c1-42f9-b260-8a10ce456955-utilities\") pod \"6a618e04-42c1-42f9-b260-8a10ce456955\" (UID: \"6a618e04-42c1-42f9-b260-8a10ce456955\") " Jan 26 09:35:58 crc kubenswrapper[4806]: I0126 09:35:58.923277 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a618e04-42c1-42f9-b260-8a10ce456955-catalog-content\") pod \"6a618e04-42c1-42f9-b260-8a10ce456955\" (UID: \"6a618e04-42c1-42f9-b260-8a10ce456955\") " Jan 26 09:35:58 crc kubenswrapper[4806]: I0126 09:35:58.923446 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgb4t\" (UniqueName: \"kubernetes.io/projected/6a618e04-42c1-42f9-b260-8a10ce456955-kube-api-access-vgb4t\") pod \"6a618e04-42c1-42f9-b260-8a10ce456955\" (UID: \"6a618e04-42c1-42f9-b260-8a10ce456955\") " Jan 26 09:35:58 crc kubenswrapper[4806]: I0126 09:35:58.924643 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a618e04-42c1-42f9-b260-8a10ce456955-utilities" (OuterVolumeSpecName: "utilities") pod "6a618e04-42c1-42f9-b260-8a10ce456955" (UID: "6a618e04-42c1-42f9-b260-8a10ce456955"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:35:58 crc kubenswrapper[4806]: I0126 09:35:58.935894 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a618e04-42c1-42f9-b260-8a10ce456955-kube-api-access-vgb4t" (OuterVolumeSpecName: "kube-api-access-vgb4t") pod "6a618e04-42c1-42f9-b260-8a10ce456955" (UID: "6a618e04-42c1-42f9-b260-8a10ce456955"). InnerVolumeSpecName "kube-api-access-vgb4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.025825 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgb4t\" (UniqueName: \"kubernetes.io/projected/6a618e04-42c1-42f9-b260-8a10ce456955-kube-api-access-vgb4t\") on node \"crc\" DevicePath \"\"" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.025861 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a618e04-42c1-42f9-b260-8a10ce456955-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.031185 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a618e04-42c1-42f9-b260-8a10ce456955-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a618e04-42c1-42f9-b260-8a10ce456955" (UID: "6a618e04-42c1-42f9-b260-8a10ce456955"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.128399 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a618e04-42c1-42f9-b260-8a10ce456955-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.398475 4806 generic.go:334] "Generic (PLEG): container finished" podID="6a618e04-42c1-42f9-b260-8a10ce456955" containerID="16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26" exitCode=0 Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.398653 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2bms4" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.398702 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bms4" event={"ID":"6a618e04-42c1-42f9-b260-8a10ce456955","Type":"ContainerDied","Data":"16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26"} Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.398751 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2bms4" event={"ID":"6a618e04-42c1-42f9-b260-8a10ce456955","Type":"ContainerDied","Data":"23861f0fed59ec588fcea4802e7074ca020c38c9ed77e59f3c641e2f0ee88f25"} Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.398773 4806 scope.go:117] "RemoveContainer" containerID="16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.459067 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2bms4"] Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.472180 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2bms4"] Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.474570 4806 scope.go:117] "RemoveContainer" containerID="3a9130cdd2e6175d332bb3be0ab1ac36e12bc89e7bc50d72cd157965df661050" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.490055 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.499776 4806 scope.go:117] "RemoveContainer" containerID="dad010d23b5bb87fc90b83a0cc063cbbfbc2263affed2ecc1759d8bf9cd2716f" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.557151 4806 scope.go:117] "RemoveContainer" containerID="16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26" Jan 26 09:35:59 crc kubenswrapper[4806]: E0126 09:35:59.558017 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26\": container with ID starting with 16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26 not found: ID does not exist" containerID="16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.558072 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26"} err="failed to get container status \"16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26\": rpc error: code = NotFound desc = could not find container 
\"16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26\": container with ID starting with 16b4ea98a8a23a69c1be9474d51f92517098ebcdb9bbe43aa8c667d4c02bce26 not found: ID does not exist" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.558093 4806 scope.go:117] "RemoveContainer" containerID="3a9130cdd2e6175d332bb3be0ab1ac36e12bc89e7bc50d72cd157965df661050" Jan 26 09:35:59 crc kubenswrapper[4806]: E0126 09:35:59.558409 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a9130cdd2e6175d332bb3be0ab1ac36e12bc89e7bc50d72cd157965df661050\": container with ID starting with 3a9130cdd2e6175d332bb3be0ab1ac36e12bc89e7bc50d72cd157965df661050 not found: ID does not exist" containerID="3a9130cdd2e6175d332bb3be0ab1ac36e12bc89e7bc50d72cd157965df661050" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.558458 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9130cdd2e6175d332bb3be0ab1ac36e12bc89e7bc50d72cd157965df661050"} err="failed to get container status \"3a9130cdd2e6175d332bb3be0ab1ac36e12bc89e7bc50d72cd157965df661050\": rpc error: code = NotFound desc = could not find container \"3a9130cdd2e6175d332bb3be0ab1ac36e12bc89e7bc50d72cd157965df661050\": container with ID starting with 3a9130cdd2e6175d332bb3be0ab1ac36e12bc89e7bc50d72cd157965df661050 not found: ID does not exist" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.558495 4806 scope.go:117] "RemoveContainer" containerID="dad010d23b5bb87fc90b83a0cc063cbbfbc2263affed2ecc1759d8bf9cd2716f" Jan 26 09:35:59 crc kubenswrapper[4806]: E0126 09:35:59.558898 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dad010d23b5bb87fc90b83a0cc063cbbfbc2263affed2ecc1759d8bf9cd2716f\": container with ID starting with dad010d23b5bb87fc90b83a0cc063cbbfbc2263affed2ecc1759d8bf9cd2716f not found: ID does not exist" containerID="dad010d23b5bb87fc90b83a0cc063cbbfbc2263affed2ecc1759d8bf9cd2716f" Jan 26 09:35:59 crc kubenswrapper[4806]: I0126 09:35:59.558929 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dad010d23b5bb87fc90b83a0cc063cbbfbc2263affed2ecc1759d8bf9cd2716f"} err="failed to get container status \"dad010d23b5bb87fc90b83a0cc063cbbfbc2263affed2ecc1759d8bf9cd2716f\": rpc error: code = NotFound desc = could not find container \"dad010d23b5bb87fc90b83a0cc063cbbfbc2263affed2ecc1759d8bf9cd2716f\": container with ID starting with dad010d23b5bb87fc90b83a0cc063cbbfbc2263affed2ecc1759d8bf9cd2716f not found: ID does not exist" Jan 26 09:36:00 crc kubenswrapper[4806]: I0126 09:36:00.985973 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cjs7"] Jan 26 09:36:01 crc kubenswrapper[4806]: I0126 09:36:01.056682 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a618e04-42c1-42f9-b260-8a10ce456955" path="/var/lib/kubelet/pods/6a618e04-42c1-42f9-b260-8a10ce456955/volumes" Jan 26 09:36:02 crc kubenswrapper[4806]: I0126 09:36:02.427126 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9cjs7" podUID="67c29e43-63e2-46e4-b9c8-18e5321ac0f3" containerName="registry-server" containerID="cri-o://6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca" gracePeriod=2 Jan 26 09:36:02 crc kubenswrapper[4806]: I0126 09:36:02.901135 4806 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.020097 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndsw4\" (UniqueName: \"kubernetes.io/projected/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-kube-api-access-ndsw4\") pod \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\" (UID: \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\") " Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.020256 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-utilities\") pod \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\" (UID: \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\") " Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.020323 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-catalog-content\") pod \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\" (UID: \"67c29e43-63e2-46e4-b9c8-18e5321ac0f3\") " Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.021096 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-utilities" (OuterVolumeSpecName: "utilities") pod "67c29e43-63e2-46e4-b9c8-18e5321ac0f3" (UID: "67c29e43-63e2-46e4-b9c8-18e5321ac0f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.031416 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-kube-api-access-ndsw4" (OuterVolumeSpecName: "kube-api-access-ndsw4") pod "67c29e43-63e2-46e4-b9c8-18e5321ac0f3" (UID: "67c29e43-63e2-46e4-b9c8-18e5321ac0f3"). InnerVolumeSpecName "kube-api-access-ndsw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.059204 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67c29e43-63e2-46e4-b9c8-18e5321ac0f3" (UID: "67c29e43-63e2-46e4-b9c8-18e5321ac0f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.122821 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndsw4\" (UniqueName: \"kubernetes.io/projected/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-kube-api-access-ndsw4\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.122860 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.122873 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67c29e43-63e2-46e4-b9c8-18e5321ac0f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.438971 4806 generic.go:334] "Generic (PLEG): container finished" podID="67c29e43-63e2-46e4-b9c8-18e5321ac0f3" containerID="6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca" exitCode=0 Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.439180 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cjs7" event={"ID":"67c29e43-63e2-46e4-b9c8-18e5321ac0f3","Type":"ContainerDied","Data":"6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca"} Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.439247 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9cjs7" event={"ID":"67c29e43-63e2-46e4-b9c8-18e5321ac0f3","Type":"ContainerDied","Data":"9c478d2782823d5babfa1a50f93f9bb985bbcf84d78f52447d7d15fbea85a52f"} Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.439269 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9cjs7" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.439274 4806 scope.go:117] "RemoveContainer" containerID="6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.470690 4806 scope.go:117] "RemoveContainer" containerID="cf394fadf06fd072db0fdc931a6ce899ce0f88ce1738c28bbf537cf4c687e831" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.495163 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cjs7"] Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.508192 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9cjs7"] Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.516203 4806 scope.go:117] "RemoveContainer" containerID="f1349196a0f5063fc78e7cca2176481a45cd28ff76978fd78be9e269ce40bb13" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.555873 4806 scope.go:117] "RemoveContainer" containerID="6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca" Jan 26 09:36:03 crc kubenswrapper[4806]: E0126 09:36:03.556313 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca\": container with ID starting with 6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca not found: ID does not exist" containerID="6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.556359 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca"} err="failed to get container status \"6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca\": rpc error: code = NotFound desc = could not find container \"6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca\": container with ID starting with 6dc6291b0c746ae18a06e987e8468dc6c1166fc2324c727c4768b9b0ccdbb7ca not found: ID does not exist" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.556390 4806 scope.go:117] "RemoveContainer" containerID="cf394fadf06fd072db0fdc931a6ce899ce0f88ce1738c28bbf537cf4c687e831" Jan 26 09:36:03 crc kubenswrapper[4806]: E0126 09:36:03.556711 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf394fadf06fd072db0fdc931a6ce899ce0f88ce1738c28bbf537cf4c687e831\": container with ID starting with cf394fadf06fd072db0fdc931a6ce899ce0f88ce1738c28bbf537cf4c687e831 not found: ID does not exist" containerID="cf394fadf06fd072db0fdc931a6ce899ce0f88ce1738c28bbf537cf4c687e831" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.556739 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf394fadf06fd072db0fdc931a6ce899ce0f88ce1738c28bbf537cf4c687e831"} err="failed to get container status \"cf394fadf06fd072db0fdc931a6ce899ce0f88ce1738c28bbf537cf4c687e831\": rpc error: code = NotFound desc = could not find container \"cf394fadf06fd072db0fdc931a6ce899ce0f88ce1738c28bbf537cf4c687e831\": container with ID starting with cf394fadf06fd072db0fdc931a6ce899ce0f88ce1738c28bbf537cf4c687e831 not found: ID does not exist" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.556760 4806 scope.go:117] "RemoveContainer" 
containerID="f1349196a0f5063fc78e7cca2176481a45cd28ff76978fd78be9e269ce40bb13" Jan 26 09:36:03 crc kubenswrapper[4806]: E0126 09:36:03.556987 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1349196a0f5063fc78e7cca2176481a45cd28ff76978fd78be9e269ce40bb13\": container with ID starting with f1349196a0f5063fc78e7cca2176481a45cd28ff76978fd78be9e269ce40bb13 not found: ID does not exist" containerID="f1349196a0f5063fc78e7cca2176481a45cd28ff76978fd78be9e269ce40bb13" Jan 26 09:36:03 crc kubenswrapper[4806]: I0126 09:36:03.557007 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1349196a0f5063fc78e7cca2176481a45cd28ff76978fd78be9e269ce40bb13"} err="failed to get container status \"f1349196a0f5063fc78e7cca2176481a45cd28ff76978fd78be9e269ce40bb13\": rpc error: code = NotFound desc = could not find container \"f1349196a0f5063fc78e7cca2176481a45cd28ff76978fd78be9e269ce40bb13\": container with ID starting with f1349196a0f5063fc78e7cca2176481a45cd28ff76978fd78be9e269ce40bb13 not found: ID does not exist" Jan 26 09:36:05 crc kubenswrapper[4806]: I0126 09:36:05.051444 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67c29e43-63e2-46e4-b9c8-18e5321ac0f3" path="/var/lib/kubelet/pods/67c29e43-63e2-46e4-b9c8-18e5321ac0f3/volumes" Jan 26 09:36:07 crc kubenswrapper[4806]: I0126 09:36:07.481272 4806 generic.go:334] "Generic (PLEG): container finished" podID="e2f598ac-916e-43f9-9d50-09c4be97c717" containerID="88fdc1d9344479c10ca824d5d551e93e6dbfba04a3baa898e553f266398c29e8" exitCode=1 Jan 26 09:36:07 crc kubenswrapper[4806]: I0126 09:36:07.481377 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"e2f598ac-916e-43f9-9d50-09c4be97c717","Type":"ContainerDied","Data":"88fdc1d9344479c10ca824d5d551e93e6dbfba04a3baa898e553f266398c29e8"} Jan 26 09:36:08 crc kubenswrapper[4806]: I0126 09:36:08.042647 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:36:08 crc kubenswrapper[4806]: E0126 09:36:08.042999 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.029823 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.156579 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-openstack-config-secret\") pod \"e2f598ac-916e-43f9-9d50-09c4be97c717\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.156681 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7x86\" (UniqueName: \"kubernetes.io/projected/e2f598ac-916e-43f9-9d50-09c4be97c717-kube-api-access-c7x86\") pod \"e2f598ac-916e-43f9-9d50-09c4be97c717\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.156808 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e2f598ac-916e-43f9-9d50-09c4be97c717-test-operator-ephemeral-workdir\") pod \"e2f598ac-916e-43f9-9d50-09c4be97c717\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.157080 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"e2f598ac-916e-43f9-9d50-09c4be97c717\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.157108 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2f598ac-916e-43f9-9d50-09c4be97c717-openstack-config\") pod \"e2f598ac-916e-43f9-9d50-09c4be97c717\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.157180 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-ca-certs\") pod \"e2f598ac-916e-43f9-9d50-09c4be97c717\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.157215 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2f598ac-916e-43f9-9d50-09c4be97c717-config-data\") pod \"e2f598ac-916e-43f9-9d50-09c4be97c717\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.157301 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e2f598ac-916e-43f9-9d50-09c4be97c717-test-operator-ephemeral-temporary\") pod \"e2f598ac-916e-43f9-9d50-09c4be97c717\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.157425 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-ssh-key\") pod \"e2f598ac-916e-43f9-9d50-09c4be97c717\" (UID: \"e2f598ac-916e-43f9-9d50-09c4be97c717\") " Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.169823 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2f598ac-916e-43f9-9d50-09c4be97c717-test-operator-ephemeral-temporary" 
(OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "e2f598ac-916e-43f9-9d50-09c4be97c717" (UID: "e2f598ac-916e-43f9-9d50-09c4be97c717"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.176165 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f598ac-916e-43f9-9d50-09c4be97c717-config-data" (OuterVolumeSpecName: "config-data") pod "e2f598ac-916e-43f9-9d50-09c4be97c717" (UID: "e2f598ac-916e-43f9-9d50-09c4be97c717"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.185696 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2f598ac-916e-43f9-9d50-09c4be97c717-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "e2f598ac-916e-43f9-9d50-09c4be97c717" (UID: "e2f598ac-916e-43f9-9d50-09c4be97c717"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.197936 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f598ac-916e-43f9-9d50-09c4be97c717-kube-api-access-c7x86" (OuterVolumeSpecName: "kube-api-access-c7x86") pod "e2f598ac-916e-43f9-9d50-09c4be97c717" (UID: "e2f598ac-916e-43f9-9d50-09c4be97c717"). InnerVolumeSpecName "kube-api-access-c7x86". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.214994 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "e2f598ac-916e-43f9-9d50-09c4be97c717" (UID: "e2f598ac-916e-43f9-9d50-09c4be97c717"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.271864 4806 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e2f598ac-916e-43f9-9d50-09c4be97c717-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.272460 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e2f598ac-916e-43f9-9d50-09c4be97c717" (UID: "e2f598ac-916e-43f9-9d50-09c4be97c717"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.284416 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.284449 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2f598ac-916e-43f9-9d50-09c4be97c717-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.284462 4806 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e2f598ac-916e-43f9-9d50-09c4be97c717-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.284480 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7x86\" (UniqueName: \"kubernetes.io/projected/e2f598ac-916e-43f9-9d50-09c4be97c717-kube-api-access-c7x86\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.285242 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2f598ac-916e-43f9-9d50-09c4be97c717-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e2f598ac-916e-43f9-9d50-09c4be97c717" (UID: "e2f598ac-916e-43f9-9d50-09c4be97c717"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.288680 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e2f598ac-916e-43f9-9d50-09c4be97c717" (UID: "e2f598ac-916e-43f9-9d50-09c4be97c717"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.293953 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "e2f598ac-916e-43f9-9d50-09c4be97c717" (UID: "e2f598ac-916e-43f9-9d50-09c4be97c717"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.311957 4806 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.385683 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.385967 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.385979 4806 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.385989 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2f598ac-916e-43f9-9d50-09c4be97c717-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.385998 4806 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e2f598ac-916e-43f9-9d50-09c4be97c717-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.496344 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"e2f598ac-916e-43f9-9d50-09c4be97c717","Type":"ContainerDied","Data":"9a774aa32768cae47e6c3e3d00a1e3e33e1c492130b9577838269d73afbae778"} Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.496382 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a774aa32768cae47e6c3e3d00a1e3e33e1c492130b9577838269d73afbae778" Jan 26 09:36:09 crc kubenswrapper[4806]: I0126 09:36:09.496409 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.746883 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 09:36:17 crc kubenswrapper[4806]: E0126 09:36:17.748836 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a618e04-42c1-42f9-b260-8a10ce456955" containerName="registry-server" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.749042 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a618e04-42c1-42f9-b260-8a10ce456955" containerName="registry-server" Jan 26 09:36:17 crc kubenswrapper[4806]: E0126 09:36:17.749062 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c29e43-63e2-46e4-b9c8-18e5321ac0f3" containerName="extract-content" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.749074 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c29e43-63e2-46e4-b9c8-18e5321ac0f3" containerName="extract-content" Jan 26 09:36:17 crc kubenswrapper[4806]: E0126 09:36:17.749106 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a618e04-42c1-42f9-b260-8a10ce456955" containerName="extract-content" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.749116 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a618e04-42c1-42f9-b260-8a10ce456955" containerName="extract-content" Jan 26 09:36:17 crc kubenswrapper[4806]: E0126 09:36:17.749137 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f598ac-916e-43f9-9d50-09c4be97c717" containerName="tempest-tests-tempest-tests-runner" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.749151 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f598ac-916e-43f9-9d50-09c4be97c717" containerName="tempest-tests-tempest-tests-runner" Jan 26 09:36:17 crc kubenswrapper[4806]: E0126 09:36:17.749178 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c29e43-63e2-46e4-b9c8-18e5321ac0f3" containerName="extract-utilities" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.749189 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c29e43-63e2-46e4-b9c8-18e5321ac0f3" containerName="extract-utilities" Jan 26 09:36:17 crc kubenswrapper[4806]: E0126 09:36:17.749218 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67c29e43-63e2-46e4-b9c8-18e5321ac0f3" containerName="registry-server" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.749229 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="67c29e43-63e2-46e4-b9c8-18e5321ac0f3" containerName="registry-server" Jan 26 09:36:17 crc kubenswrapper[4806]: E0126 09:36:17.749293 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a618e04-42c1-42f9-b260-8a10ce456955" containerName="extract-utilities" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.749304 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a618e04-42c1-42f9-b260-8a10ce456955" containerName="extract-utilities" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.749727 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="67c29e43-63e2-46e4-b9c8-18e5321ac0f3" containerName="registry-server" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.749759 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a618e04-42c1-42f9-b260-8a10ce456955" containerName="registry-server" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 
09:36:17.749783 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f598ac-916e-43f9-9d50-09c4be97c717" containerName="tempest-tests-tempest-tests-runner" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.750776 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.761574 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-v9vjw" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.764073 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.895571 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bda8645d-d202-47ae-a35e-c187b18dc23f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.895920 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxr5s\" (UniqueName: \"kubernetes.io/projected/bda8645d-d202-47ae-a35e-c187b18dc23f-kube-api-access-vxr5s\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bda8645d-d202-47ae-a35e-c187b18dc23f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.998191 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bda8645d-d202-47ae-a35e-c187b18dc23f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.998306 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxr5s\" (UniqueName: \"kubernetes.io/projected/bda8645d-d202-47ae-a35e-c187b18dc23f-kube-api-access-vxr5s\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bda8645d-d202-47ae-a35e-c187b18dc23f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 09:36:17 crc kubenswrapper[4806]: I0126 09:36:17.999258 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bda8645d-d202-47ae-a35e-c187b18dc23f\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 09:36:18 crc kubenswrapper[4806]: I0126 09:36:18.032209 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxr5s\" (UniqueName: \"kubernetes.io/projected/bda8645d-d202-47ae-a35e-c187b18dc23f-kube-api-access-vxr5s\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bda8645d-d202-47ae-a35e-c187b18dc23f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 09:36:18 crc kubenswrapper[4806]: I0126 09:36:18.048460 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"bda8645d-d202-47ae-a35e-c187b18dc23f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 09:36:18 crc kubenswrapper[4806]: I0126 09:36:18.073715 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 09:36:18 crc kubenswrapper[4806]: I0126 09:36:18.553765 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 09:36:18 crc kubenswrapper[4806]: I0126 09:36:18.602816 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"bda8645d-d202-47ae-a35e-c187b18dc23f","Type":"ContainerStarted","Data":"cc2d922b75afc979f95abf7d3fa35654c5f6b265e750ff76edb3f29655f9d3ba"} Jan 26 09:36:20 crc kubenswrapper[4806]: I0126 09:36:20.042599 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:36:20 crc kubenswrapper[4806]: E0126 09:36:20.043093 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:36:20 crc kubenswrapper[4806]: I0126 09:36:20.628197 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"bda8645d-d202-47ae-a35e-c187b18dc23f","Type":"ContainerStarted","Data":"edc39dd23ab485bd1f625d65c965fca27a450168a1dadb0f08556cc41c4e38fe"} Jan 26 09:36:20 crc kubenswrapper[4806]: I0126 09:36:20.653010 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.696952812 podStartE2EDuration="3.652989292s" podCreationTimestamp="2026-01-26 09:36:17 +0000 UTC" firstStartedPulling="2026-01-26 09:36:18.57870354 +0000 UTC m=+6157.843111596" lastFinishedPulling="2026-01-26 09:36:19.53474001 +0000 UTC m=+6158.799148076" observedRunningTime="2026-01-26 09:36:20.645430925 +0000 UTC m=+6159.909839011" watchObservedRunningTime="2026-01-26 09:36:20.652989292 +0000 UTC m=+6159.917397358" Jan 26 09:36:32 crc kubenswrapper[4806]: I0126 09:36:32.042770 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:36:32 crc kubenswrapper[4806]: E0126 09:36:32.043570 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:36:45 crc kubenswrapper[4806]: I0126 09:36:45.046406 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:36:45 crc kubenswrapper[4806]: 
E0126 09:36:45.047090 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:36:50 crc kubenswrapper[4806]: I0126 09:36:50.489603 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6xrt9/must-gather-647n6"] Jan 26 09:36:50 crc kubenswrapper[4806]: I0126 09:36:50.493233 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6xrt9/must-gather-647n6" Jan 26 09:36:50 crc kubenswrapper[4806]: I0126 09:36:50.502254 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6xrt9"/"openshift-service-ca.crt" Jan 26 09:36:50 crc kubenswrapper[4806]: I0126 09:36:50.532197 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6xrt9"/"kube-root-ca.crt" Jan 26 09:36:50 crc kubenswrapper[4806]: I0126 09:36:50.572810 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f25dd6f2-cd3b-42ea-8adc-d435c977286c-must-gather-output\") pod \"must-gather-647n6\" (UID: \"f25dd6f2-cd3b-42ea-8adc-d435c977286c\") " pod="openshift-must-gather-6xrt9/must-gather-647n6" Jan 26 09:36:50 crc kubenswrapper[4806]: I0126 09:36:50.572871 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hg6l\" (UniqueName: \"kubernetes.io/projected/f25dd6f2-cd3b-42ea-8adc-d435c977286c-kube-api-access-5hg6l\") pod \"must-gather-647n6\" (UID: \"f25dd6f2-cd3b-42ea-8adc-d435c977286c\") " pod="openshift-must-gather-6xrt9/must-gather-647n6" Jan 26 09:36:50 crc kubenswrapper[4806]: I0126 09:36:50.593574 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6xrt9/must-gather-647n6"] Jan 26 09:36:50 crc kubenswrapper[4806]: I0126 09:36:50.677035 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f25dd6f2-cd3b-42ea-8adc-d435c977286c-must-gather-output\") pod \"must-gather-647n6\" (UID: \"f25dd6f2-cd3b-42ea-8adc-d435c977286c\") " pod="openshift-must-gather-6xrt9/must-gather-647n6" Jan 26 09:36:50 crc kubenswrapper[4806]: I0126 09:36:50.677115 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hg6l\" (UniqueName: \"kubernetes.io/projected/f25dd6f2-cd3b-42ea-8adc-d435c977286c-kube-api-access-5hg6l\") pod \"must-gather-647n6\" (UID: \"f25dd6f2-cd3b-42ea-8adc-d435c977286c\") " pod="openshift-must-gather-6xrt9/must-gather-647n6" Jan 26 09:36:50 crc kubenswrapper[4806]: I0126 09:36:50.678071 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f25dd6f2-cd3b-42ea-8adc-d435c977286c-must-gather-output\") pod \"must-gather-647n6\" (UID: \"f25dd6f2-cd3b-42ea-8adc-d435c977286c\") " pod="openshift-must-gather-6xrt9/must-gather-647n6" Jan 26 09:36:50 crc kubenswrapper[4806]: I0126 09:36:50.714248 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hg6l\" (UniqueName: 
\"kubernetes.io/projected/f25dd6f2-cd3b-42ea-8adc-d435c977286c-kube-api-access-5hg6l\") pod \"must-gather-647n6\" (UID: \"f25dd6f2-cd3b-42ea-8adc-d435c977286c\") " pod="openshift-must-gather-6xrt9/must-gather-647n6" Jan 26 09:36:50 crc kubenswrapper[4806]: I0126 09:36:50.830913 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6xrt9/must-gather-647n6" Jan 26 09:36:51 crc kubenswrapper[4806]: I0126 09:36:51.303948 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6xrt9/must-gather-647n6"] Jan 26 09:36:51 crc kubenswrapper[4806]: I0126 09:36:51.952659 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6xrt9/must-gather-647n6" event={"ID":"f25dd6f2-cd3b-42ea-8adc-d435c977286c","Type":"ContainerStarted","Data":"6f5aa137ac4722cef26e8ea6fc44c72826d2e92939e9a6179b6fb7fc8cb4f954"} Jan 26 09:36:59 crc kubenswrapper[4806]: I0126 09:36:59.054717 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6xrt9/must-gather-647n6" event={"ID":"f25dd6f2-cd3b-42ea-8adc-d435c977286c","Type":"ContainerStarted","Data":"6a153f9564b5b1dc0308e6cb6b0f362ea4819e3ebf8be6e200c105a95ceb5bf6"} Jan 26 09:36:59 crc kubenswrapper[4806]: I0126 09:36:59.055279 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6xrt9/must-gather-647n6" event={"ID":"f25dd6f2-cd3b-42ea-8adc-d435c977286c","Type":"ContainerStarted","Data":"7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd"} Jan 26 09:36:59 crc kubenswrapper[4806]: I0126 09:36:59.077061 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6xrt9/must-gather-647n6" podStartSLOduration=2.413683493 podStartE2EDuration="9.077045181s" podCreationTimestamp="2026-01-26 09:36:50 +0000 UTC" firstStartedPulling="2026-01-26 09:36:51.313111679 +0000 UTC m=+6190.577519745" lastFinishedPulling="2026-01-26 09:36:57.976473387 +0000 UTC m=+6197.240881433" observedRunningTime="2026-01-26 09:36:59.064892651 +0000 UTC m=+6198.329300707" watchObservedRunningTime="2026-01-26 09:36:59.077045181 +0000 UTC m=+6198.341453237" Jan 26 09:37:00 crc kubenswrapper[4806]: I0126 09:37:00.042386 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:37:00 crc kubenswrapper[4806]: E0126 09:37:00.042827 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:37:02 crc kubenswrapper[4806]: I0126 09:37:02.651910 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6xrt9/crc-debug-sq99m"] Jan 26 09:37:02 crc kubenswrapper[4806]: I0126 09:37:02.654495 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6xrt9/crc-debug-sq99m" Jan 26 09:37:02 crc kubenswrapper[4806]: I0126 09:37:02.656590 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6xrt9"/"default-dockercfg-7zq75" Jan 26 09:37:02 crc kubenswrapper[4806]: I0126 09:37:02.839668 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tscrl\" (UniqueName: \"kubernetes.io/projected/2ba1ad37-da08-4110-9a97-5d6a9c29361c-kube-api-access-tscrl\") pod \"crc-debug-sq99m\" (UID: \"2ba1ad37-da08-4110-9a97-5d6a9c29361c\") " pod="openshift-must-gather-6xrt9/crc-debug-sq99m" Jan 26 09:37:02 crc kubenswrapper[4806]: I0126 09:37:02.840730 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2ba1ad37-da08-4110-9a97-5d6a9c29361c-host\") pod \"crc-debug-sq99m\" (UID: \"2ba1ad37-da08-4110-9a97-5d6a9c29361c\") " pod="openshift-must-gather-6xrt9/crc-debug-sq99m" Jan 26 09:37:02 crc kubenswrapper[4806]: I0126 09:37:02.942477 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2ba1ad37-da08-4110-9a97-5d6a9c29361c-host\") pod \"crc-debug-sq99m\" (UID: \"2ba1ad37-da08-4110-9a97-5d6a9c29361c\") " pod="openshift-must-gather-6xrt9/crc-debug-sq99m" Jan 26 09:37:02 crc kubenswrapper[4806]: I0126 09:37:02.942635 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tscrl\" (UniqueName: \"kubernetes.io/projected/2ba1ad37-da08-4110-9a97-5d6a9c29361c-kube-api-access-tscrl\") pod \"crc-debug-sq99m\" (UID: \"2ba1ad37-da08-4110-9a97-5d6a9c29361c\") " pod="openshift-must-gather-6xrt9/crc-debug-sq99m" Jan 26 09:37:02 crc kubenswrapper[4806]: I0126 09:37:02.943561 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2ba1ad37-da08-4110-9a97-5d6a9c29361c-host\") pod \"crc-debug-sq99m\" (UID: \"2ba1ad37-da08-4110-9a97-5d6a9c29361c\") " pod="openshift-must-gather-6xrt9/crc-debug-sq99m" Jan 26 09:37:02 crc kubenswrapper[4806]: I0126 09:37:02.960854 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tscrl\" (UniqueName: \"kubernetes.io/projected/2ba1ad37-da08-4110-9a97-5d6a9c29361c-kube-api-access-tscrl\") pod \"crc-debug-sq99m\" (UID: \"2ba1ad37-da08-4110-9a97-5d6a9c29361c\") " pod="openshift-must-gather-6xrt9/crc-debug-sq99m" Jan 26 09:37:02 crc kubenswrapper[4806]: I0126 09:37:02.977914 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6xrt9/crc-debug-sq99m" Jan 26 09:37:03 crc kubenswrapper[4806]: I0126 09:37:03.141727 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6xrt9/crc-debug-sq99m" event={"ID":"2ba1ad37-da08-4110-9a97-5d6a9c29361c","Type":"ContainerStarted","Data":"893462def6766a6605c609a209352f54171704bf60846f49f3d1737960894d26"} Jan 26 09:37:13 crc kubenswrapper[4806]: I0126 09:37:13.041981 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:37:13 crc kubenswrapper[4806]: E0126 09:37:13.042742 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:37:16 crc kubenswrapper[4806]: I0126 09:37:16.280505 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6xrt9/crc-debug-sq99m" event={"ID":"2ba1ad37-da08-4110-9a97-5d6a9c29361c","Type":"ContainerStarted","Data":"87d06e8ba843f8a25a7e251c4c041fbeda9b69965594f05060d02c640e763bc9"} Jan 26 09:37:24 crc kubenswrapper[4806]: I0126 09:37:24.042540 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:37:24 crc kubenswrapper[4806]: E0126 09:37:24.043742 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:37:35 crc kubenswrapper[4806]: I0126 09:37:35.042263 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:37:35 crc kubenswrapper[4806]: E0126 09:37:35.042987 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:37:49 crc kubenswrapper[4806]: I0126 09:37:49.042227 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:37:49 crc kubenswrapper[4806]: E0126 09:37:49.043081 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:38:00 crc kubenswrapper[4806]: I0126 09:38:00.042335 4806 scope.go:117] "RemoveContainer" 
containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:38:00 crc kubenswrapper[4806]: E0126 09:38:00.045076 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:38:04 crc kubenswrapper[4806]: I0126 09:38:04.683709 4806 generic.go:334] "Generic (PLEG): container finished" podID="2ba1ad37-da08-4110-9a97-5d6a9c29361c" containerID="87d06e8ba843f8a25a7e251c4c041fbeda9b69965594f05060d02c640e763bc9" exitCode=0 Jan 26 09:38:04 crc kubenswrapper[4806]: I0126 09:38:04.684122 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6xrt9/crc-debug-sq99m" event={"ID":"2ba1ad37-da08-4110-9a97-5d6a9c29361c","Type":"ContainerDied","Data":"87d06e8ba843f8a25a7e251c4c041fbeda9b69965594f05060d02c640e763bc9"} Jan 26 09:38:05 crc kubenswrapper[4806]: I0126 09:38:05.827973 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6xrt9/crc-debug-sq99m" Jan 26 09:38:05 crc kubenswrapper[4806]: I0126 09:38:05.866446 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6xrt9/crc-debug-sq99m"] Jan 26 09:38:05 crc kubenswrapper[4806]: I0126 09:38:05.875698 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6xrt9/crc-debug-sq99m"] Jan 26 09:38:05 crc kubenswrapper[4806]: I0126 09:38:05.961386 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tscrl\" (UniqueName: \"kubernetes.io/projected/2ba1ad37-da08-4110-9a97-5d6a9c29361c-kube-api-access-tscrl\") pod \"2ba1ad37-da08-4110-9a97-5d6a9c29361c\" (UID: \"2ba1ad37-da08-4110-9a97-5d6a9c29361c\") " Jan 26 09:38:05 crc kubenswrapper[4806]: I0126 09:38:05.961753 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2ba1ad37-da08-4110-9a97-5d6a9c29361c-host\") pod \"2ba1ad37-da08-4110-9a97-5d6a9c29361c\" (UID: \"2ba1ad37-da08-4110-9a97-5d6a9c29361c\") " Jan 26 09:38:05 crc kubenswrapper[4806]: I0126 09:38:05.961877 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ba1ad37-da08-4110-9a97-5d6a9c29361c-host" (OuterVolumeSpecName: "host") pod "2ba1ad37-da08-4110-9a97-5d6a9c29361c" (UID: "2ba1ad37-da08-4110-9a97-5d6a9c29361c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:38:05 crc kubenswrapper[4806]: I0126 09:38:05.962325 4806 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2ba1ad37-da08-4110-9a97-5d6a9c29361c-host\") on node \"crc\" DevicePath \"\"" Jan 26 09:38:05 crc kubenswrapper[4806]: I0126 09:38:05.974180 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba1ad37-da08-4110-9a97-5d6a9c29361c-kube-api-access-tscrl" (OuterVolumeSpecName: "kube-api-access-tscrl") pod "2ba1ad37-da08-4110-9a97-5d6a9c29361c" (UID: "2ba1ad37-da08-4110-9a97-5d6a9c29361c"). InnerVolumeSpecName "kube-api-access-tscrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:38:06 crc kubenswrapper[4806]: I0126 09:38:06.063989 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tscrl\" (UniqueName: \"kubernetes.io/projected/2ba1ad37-da08-4110-9a97-5d6a9c29361c-kube-api-access-tscrl\") on node \"crc\" DevicePath \"\"" Jan 26 09:38:06 crc kubenswrapper[4806]: I0126 09:38:06.698986 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="893462def6766a6605c609a209352f54171704bf60846f49f3d1737960894d26" Jan 26 09:38:06 crc kubenswrapper[4806]: I0126 09:38:06.699038 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6xrt9/crc-debug-sq99m" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.056905 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba1ad37-da08-4110-9a97-5d6a9c29361c" path="/var/lib/kubelet/pods/2ba1ad37-da08-4110-9a97-5d6a9c29361c/volumes" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.070041 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6xrt9/crc-debug-fhbzg"] Jan 26 09:38:07 crc kubenswrapper[4806]: E0126 09:38:07.070399 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba1ad37-da08-4110-9a97-5d6a9c29361c" containerName="container-00" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.070415 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba1ad37-da08-4110-9a97-5d6a9c29361c" containerName="container-00" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.070613 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba1ad37-da08-4110-9a97-5d6a9c29361c" containerName="container-00" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.071183 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.073319 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6xrt9"/"default-dockercfg-7zq75" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.181972 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkz2n\" (UniqueName: \"kubernetes.io/projected/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24-kube-api-access-pkz2n\") pod \"crc-debug-fhbzg\" (UID: \"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24\") " pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.182418 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24-host\") pod \"crc-debug-fhbzg\" (UID: \"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24\") " pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.284124 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24-host\") pod \"crc-debug-fhbzg\" (UID: \"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24\") " pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.284242 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkz2n\" (UniqueName: \"kubernetes.io/projected/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24-kube-api-access-pkz2n\") pod \"crc-debug-fhbzg\" (UID: \"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24\") " pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.284266 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24-host\") pod \"crc-debug-fhbzg\" (UID: \"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24\") " pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.301965 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkz2n\" (UniqueName: \"kubernetes.io/projected/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24-kube-api-access-pkz2n\") pod \"crc-debug-fhbzg\" (UID: \"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24\") " pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.387454 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.708660 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" event={"ID":"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24","Type":"ContainerStarted","Data":"2fe90882c9b4d1ff29e3767613bbdfa585e4531ce39c6128082e97504f21908a"} Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.708924 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" event={"ID":"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24","Type":"ContainerStarted","Data":"6fb48ae39674c1899cb9a83b3166adbfc3659a7ca8f633b3b6c8a7e19fed9dd5"} Jan 26 09:38:07 crc kubenswrapper[4806]: I0126 09:38:07.722372 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" podStartSLOduration=0.722356265 podStartE2EDuration="722.356265ms" podCreationTimestamp="2026-01-26 09:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 09:38:07.720246575 +0000 UTC m=+6266.984654641" watchObservedRunningTime="2026-01-26 09:38:07.722356265 +0000 UTC m=+6266.986764321" Jan 26 09:38:08 crc kubenswrapper[4806]: I0126 09:38:08.720986 4806 generic.go:334] "Generic (PLEG): container finished" podID="b17fbcca-edd4-4eb6-8c6c-04282a1d8b24" containerID="2fe90882c9b4d1ff29e3767613bbdfa585e4531ce39c6128082e97504f21908a" exitCode=0 Jan 26 09:38:08 crc kubenswrapper[4806]: I0126 09:38:08.721051 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" event={"ID":"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24","Type":"ContainerDied","Data":"2fe90882c9b4d1ff29e3767613bbdfa585e4531ce39c6128082e97504f21908a"} Jan 26 09:38:09 crc kubenswrapper[4806]: I0126 09:38:09.831733 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" Jan 26 09:38:09 crc kubenswrapper[4806]: I0126 09:38:09.923154 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24-host\") pod \"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24\" (UID: \"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24\") " Jan 26 09:38:09 crc kubenswrapper[4806]: I0126 09:38:09.923293 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24-host" (OuterVolumeSpecName: "host") pod "b17fbcca-edd4-4eb6-8c6c-04282a1d8b24" (UID: "b17fbcca-edd4-4eb6-8c6c-04282a1d8b24"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:38:09 crc kubenswrapper[4806]: I0126 09:38:09.923706 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkz2n\" (UniqueName: \"kubernetes.io/projected/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24-kube-api-access-pkz2n\") pod \"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24\" (UID: \"b17fbcca-edd4-4eb6-8c6c-04282a1d8b24\") " Jan 26 09:38:09 crc kubenswrapper[4806]: I0126 09:38:09.924203 4806 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24-host\") on node \"crc\" DevicePath \"\"" Jan 26 09:38:09 crc kubenswrapper[4806]: I0126 09:38:09.928740 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24-kube-api-access-pkz2n" (OuterVolumeSpecName: "kube-api-access-pkz2n") pod "b17fbcca-edd4-4eb6-8c6c-04282a1d8b24" (UID: "b17fbcca-edd4-4eb6-8c6c-04282a1d8b24"). InnerVolumeSpecName "kube-api-access-pkz2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:38:09 crc kubenswrapper[4806]: I0126 09:38:09.968647 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6xrt9/crc-debug-fhbzg"] Jan 26 09:38:09 crc kubenswrapper[4806]: I0126 09:38:09.976058 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6xrt9/crc-debug-fhbzg"] Jan 26 09:38:10 crc kubenswrapper[4806]: I0126 09:38:10.026425 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkz2n\" (UniqueName: \"kubernetes.io/projected/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24-kube-api-access-pkz2n\") on node \"crc\" DevicePath \"\"" Jan 26 09:38:10 crc kubenswrapper[4806]: I0126 09:38:10.741756 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fb48ae39674c1899cb9a83b3166adbfc3659a7ca8f633b3b6c8a7e19fed9dd5" Jan 26 09:38:10 crc kubenswrapper[4806]: I0126 09:38:10.741864 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6xrt9/crc-debug-fhbzg" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.066790 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b17fbcca-edd4-4eb6-8c6c-04282a1d8b24" path="/var/lib/kubelet/pods/b17fbcca-edd4-4eb6-8c6c-04282a1d8b24/volumes" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.146394 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6xrt9/crc-debug-vgtsg"] Jan 26 09:38:11 crc kubenswrapper[4806]: E0126 09:38:11.146837 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b17fbcca-edd4-4eb6-8c6c-04282a1d8b24" containerName="container-00" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.146859 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17fbcca-edd4-4eb6-8c6c-04282a1d8b24" containerName="container-00" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.147106 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b17fbcca-edd4-4eb6-8c6c-04282a1d8b24" containerName="container-00" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.147831 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6xrt9/crc-debug-vgtsg" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.150571 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-6xrt9"/"default-dockercfg-7zq75" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.250813 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e3d2c35-5ee8-4770-b892-74c4d62d70ad-host\") pod \"crc-debug-vgtsg\" (UID: \"6e3d2c35-5ee8-4770-b892-74c4d62d70ad\") " pod="openshift-must-gather-6xrt9/crc-debug-vgtsg" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.251308 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcd8k\" (UniqueName: \"kubernetes.io/projected/6e3d2c35-5ee8-4770-b892-74c4d62d70ad-kube-api-access-dcd8k\") pod \"crc-debug-vgtsg\" (UID: \"6e3d2c35-5ee8-4770-b892-74c4d62d70ad\") " pod="openshift-must-gather-6xrt9/crc-debug-vgtsg" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.352739 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcd8k\" (UniqueName: \"kubernetes.io/projected/6e3d2c35-5ee8-4770-b892-74c4d62d70ad-kube-api-access-dcd8k\") pod \"crc-debug-vgtsg\" (UID: \"6e3d2c35-5ee8-4770-b892-74c4d62d70ad\") " pod="openshift-must-gather-6xrt9/crc-debug-vgtsg" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.353000 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e3d2c35-5ee8-4770-b892-74c4d62d70ad-host\") pod \"crc-debug-vgtsg\" (UID: \"6e3d2c35-5ee8-4770-b892-74c4d62d70ad\") " pod="openshift-must-gather-6xrt9/crc-debug-vgtsg" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.353241 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e3d2c35-5ee8-4770-b892-74c4d62d70ad-host\") pod \"crc-debug-vgtsg\" (UID: \"6e3d2c35-5ee8-4770-b892-74c4d62d70ad\") " pod="openshift-must-gather-6xrt9/crc-debug-vgtsg" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.396747 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcd8k\" (UniqueName: \"kubernetes.io/projected/6e3d2c35-5ee8-4770-b892-74c4d62d70ad-kube-api-access-dcd8k\") pod \"crc-debug-vgtsg\" (UID: \"6e3d2c35-5ee8-4770-b892-74c4d62d70ad\") " pod="openshift-must-gather-6xrt9/crc-debug-vgtsg" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.471453 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6xrt9/crc-debug-vgtsg" Jan 26 09:38:11 crc kubenswrapper[4806]: I0126 09:38:11.752288 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6xrt9/crc-debug-vgtsg" event={"ID":"6e3d2c35-5ee8-4770-b892-74c4d62d70ad","Type":"ContainerStarted","Data":"0e12cdefde43f7abeffaec5c302a5d215be1de23ff21a80b7bb47eadd02b7e39"} Jan 26 09:38:12 crc kubenswrapper[4806]: I0126 09:38:12.765244 4806 generic.go:334] "Generic (PLEG): container finished" podID="6e3d2c35-5ee8-4770-b892-74c4d62d70ad" containerID="bd5af128c198ea2823680176fa312e7f8089679fac1487f0dbe2af50fe5dace5" exitCode=0 Jan 26 09:38:12 crc kubenswrapper[4806]: I0126 09:38:12.765293 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6xrt9/crc-debug-vgtsg" event={"ID":"6e3d2c35-5ee8-4770-b892-74c4d62d70ad","Type":"ContainerDied","Data":"bd5af128c198ea2823680176fa312e7f8089679fac1487f0dbe2af50fe5dace5"} Jan 26 09:38:12 crc kubenswrapper[4806]: I0126 09:38:12.828817 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6xrt9/crc-debug-vgtsg"] Jan 26 09:38:12 crc kubenswrapper[4806]: I0126 09:38:12.841279 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6xrt9/crc-debug-vgtsg"] Jan 26 09:38:13 crc kubenswrapper[4806]: I0126 09:38:13.877755 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6xrt9/crc-debug-vgtsg" Jan 26 09:38:13 crc kubenswrapper[4806]: I0126 09:38:13.998884 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcd8k\" (UniqueName: \"kubernetes.io/projected/6e3d2c35-5ee8-4770-b892-74c4d62d70ad-kube-api-access-dcd8k\") pod \"6e3d2c35-5ee8-4770-b892-74c4d62d70ad\" (UID: \"6e3d2c35-5ee8-4770-b892-74c4d62d70ad\") " Jan 26 09:38:13 crc kubenswrapper[4806]: I0126 09:38:13.999090 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e3d2c35-5ee8-4770-b892-74c4d62d70ad-host\") pod \"6e3d2c35-5ee8-4770-b892-74c4d62d70ad\" (UID: \"6e3d2c35-5ee8-4770-b892-74c4d62d70ad\") " Jan 26 09:38:13 crc kubenswrapper[4806]: I0126 09:38:13.999556 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6e3d2c35-5ee8-4770-b892-74c4d62d70ad-host" (OuterVolumeSpecName: "host") pod "6e3d2c35-5ee8-4770-b892-74c4d62d70ad" (UID: "6e3d2c35-5ee8-4770-b892-74c4d62d70ad"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 09:38:14 crc kubenswrapper[4806]: I0126 09:38:14.013760 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e3d2c35-5ee8-4770-b892-74c4d62d70ad-kube-api-access-dcd8k" (OuterVolumeSpecName: "kube-api-access-dcd8k") pod "6e3d2c35-5ee8-4770-b892-74c4d62d70ad" (UID: "6e3d2c35-5ee8-4770-b892-74c4d62d70ad"). InnerVolumeSpecName "kube-api-access-dcd8k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:38:14 crc kubenswrapper[4806]: I0126 09:38:14.042956 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:38:14 crc kubenswrapper[4806]: E0126 09:38:14.043149 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:38:14 crc kubenswrapper[4806]: I0126 09:38:14.100931 4806 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6e3d2c35-5ee8-4770-b892-74c4d62d70ad-host\") on node \"crc\" DevicePath \"\"" Jan 26 09:38:14 crc kubenswrapper[4806]: I0126 09:38:14.100959 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcd8k\" (UniqueName: \"kubernetes.io/projected/6e3d2c35-5ee8-4770-b892-74c4d62d70ad-kube-api-access-dcd8k\") on node \"crc\" DevicePath \"\"" Jan 26 09:38:14 crc kubenswrapper[4806]: I0126 09:38:14.781289 4806 scope.go:117] "RemoveContainer" containerID="bd5af128c198ea2823680176fa312e7f8089679fac1487f0dbe2af50fe5dace5" Jan 26 09:38:14 crc kubenswrapper[4806]: I0126 09:38:14.781317 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6xrt9/crc-debug-vgtsg" Jan 26 09:38:15 crc kubenswrapper[4806]: I0126 09:38:15.053609 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e3d2c35-5ee8-4770-b892-74c4d62d70ad" path="/var/lib/kubelet/pods/6e3d2c35-5ee8-4770-b892-74c4d62d70ad/volumes" Jan 26 09:38:27 crc kubenswrapper[4806]: I0126 09:38:27.046097 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:38:27 crc kubenswrapper[4806]: E0126 09:38:27.047079 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:38:30 crc kubenswrapper[4806]: I0126 09:38:30.944505 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5554df79f4-4pvrc_758b7482-35c7-4cda-aaff-f3e3784bc5c4/barbican-api/0.log" Jan 26 09:38:31 crc kubenswrapper[4806]: I0126 09:38:31.148424 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5554df79f4-4pvrc_758b7482-35c7-4cda-aaff-f3e3784bc5c4/barbican-api-log/0.log" Jan 26 09:38:31 crc kubenswrapper[4806]: I0126 09:38:31.270601 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c56b98c68-pf9q4_3f8773f7-27d5-469b-837e-90bf31716266/barbican-keystone-listener/0.log" Jan 26 09:38:31 crc kubenswrapper[4806]: I0126 09:38:31.547782 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7c56b98c68-pf9q4_3f8773f7-27d5-469b-837e-90bf31716266/barbican-keystone-listener-log/0.log" Jan 26 09:38:31 crc kubenswrapper[4806]: 
I0126 09:38:31.581452 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7b56fd48c9-2fhh8_c07d6e0b-e41e-402f-8d38-196e641be864/barbican-worker-log/0.log" Jan 26 09:38:31 crc kubenswrapper[4806]: I0126 09:38:31.611405 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7b56fd48c9-2fhh8_c07d6e0b-e41e-402f-8d38-196e641be864/barbican-worker/0.log" Jan 26 09:38:31 crc kubenswrapper[4806]: I0126 09:38:31.909878 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-lc8v5_b38882dc-facd-46ab-96ce-176528439b16/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:31 crc kubenswrapper[4806]: I0126 09:38:31.924261 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_be6dfa34-fa38-4375-be1f-467c5428818d/ceilometer-central-agent/0.log" Jan 26 09:38:32 crc kubenswrapper[4806]: I0126 09:38:32.043871 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_be6dfa34-fa38-4375-be1f-467c5428818d/ceilometer-notification-agent/0.log" Jan 26 09:38:32 crc kubenswrapper[4806]: I0126 09:38:32.139697 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_be6dfa34-fa38-4375-be1f-467c5428818d/sg-core/0.log" Jan 26 09:38:32 crc kubenswrapper[4806]: I0126 09:38:32.140458 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_be6dfa34-fa38-4375-be1f-467c5428818d/proxy-httpd/0.log" Jan 26 09:38:32 crc kubenswrapper[4806]: I0126 09:38:32.406110 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e18f5e2b-f6be-4016-ad01-21b9e9b8bc58/cinder-api-log/0.log" Jan 26 09:38:32 crc kubenswrapper[4806]: I0126 09:38:32.407129 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_e18f5e2b-f6be-4016-ad01-21b9e9b8bc58/cinder-api/0.log" Jan 26 09:38:32 crc kubenswrapper[4806]: I0126 09:38:32.564713 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_462d2770-3796-4a02-b83e-91de31a08bd0/cinder-scheduler/0.log" Jan 26 09:38:32 crc kubenswrapper[4806]: I0126 09:38:32.706182 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_462d2770-3796-4a02-b83e-91de31a08bd0/probe/0.log" Jan 26 09:38:32 crc kubenswrapper[4806]: I0126 09:38:32.758875 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vb25w_290f172f-1b02-41e0-a865-c926792e9121/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:32 crc kubenswrapper[4806]: I0126 09:38:32.931448 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-htjzk_ffea3d90-3e30-4e6e-9c01-ee7411638bc1/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:33 crc kubenswrapper[4806]: I0126 09:38:33.011497 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64c9b56dc5-dqgzc_94d36ef4-cca6-4740-be74-d88ac60ed646/init/0.log" Jan 26 09:38:33 crc kubenswrapper[4806]: I0126 09:38:33.466579 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64c9b56dc5-dqgzc_94d36ef4-cca6-4740-be74-d88ac60ed646/init/0.log" Jan 26 09:38:33 crc kubenswrapper[4806]: I0126 09:38:33.587026 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-6zgmb_12a200ee-7089-445b-a0eb-ae7fce15f5ec/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:33 crc kubenswrapper[4806]: I0126 09:38:33.632146 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-64c9b56dc5-dqgzc_94d36ef4-cca6-4740-be74-d88ac60ed646/dnsmasq-dns/0.log" Jan 26 09:38:33 crc kubenswrapper[4806]: I0126 09:38:33.847548 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f0a1a709-885d-4f4e-a2a2-51d7bad26f6f/glance-log/0.log" Jan 26 09:38:33 crc kubenswrapper[4806]: I0126 09:38:33.852325 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f0a1a709-885d-4f4e-a2a2-51d7bad26f6f/glance-httpd/0.log" Jan 26 09:38:34 crc kubenswrapper[4806]: I0126 09:38:34.100836 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b728490f-ad14-45d0-aa07-096fecf7be60/glance-httpd/0.log" Jan 26 09:38:34 crc kubenswrapper[4806]: I0126 09:38:34.134279 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b728490f-ad14-45d0-aa07-096fecf7be60/glance-log/0.log" Jan 26 09:38:34 crc kubenswrapper[4806]: I0126 09:38:34.751120 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5d649c5968-gb8r4_4656694c-fa67-4546-bf62-bc929866aeae/heat-engine/0.log" Jan 26 09:38:34 crc kubenswrapper[4806]: I0126 09:38:34.910339 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-5687f48547-kz5md_39ad9dca-7dee-4116-ab24-071e59b41dc2/heat-api/0.log" Jan 26 09:38:35 crc kubenswrapper[4806]: I0126 09:38:35.165599 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6b8f96b47b-sbsnb_d4ed3e96-22ec-410e-8f50-afd310343aa8/horizon/0.log" Jan 26 09:38:35 crc kubenswrapper[4806]: I0126 09:38:35.264483 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-56c9b6cf4b-dl98j_e2b71668-05ca-4e62-a0fc-1e240e24caff/heat-cfnapi/0.log" Jan 26 09:38:35 crc kubenswrapper[4806]: I0126 09:38:35.401070 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-8nw2s_597e4bf2-e48f-4f61-90a0-2e930444f754/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:35 crc kubenswrapper[4806]: I0126 09:38:35.595758 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-jzfsf_b3d151b0-221e-46fe-a24a-cb842d74c532/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:35 crc kubenswrapper[4806]: I0126 09:38:35.696888 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6b8f96b47b-sbsnb_d4ed3e96-22ec-410e-8f50-afd310343aa8/horizon-log/0.log" Jan 26 09:38:35 crc kubenswrapper[4806]: I0126 09:38:35.879874 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490301-fw69s_5fba021e-0dda-4793-abae-5b9137baf1ef/keystone-cron/0.log" Jan 26 09:38:36 crc kubenswrapper[4806]: I0126 09:38:36.110623 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_44dd7b53-ec3f-4dde-b448-315e571d5249/kube-state-metrics/0.log" Jan 26 09:38:36 crc kubenswrapper[4806]: I0126 09:38:36.208754 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-5c44c79675-nsqr2_6c7f3b17-0e6a-47c1-90d5-bc5ccd2c4453/keystone-api/0.log" Jan 26 09:38:36 crc kubenswrapper[4806]: I0126 09:38:36.307443 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-bgxrh_c23b3bb7-8ff5-4e80-8476-b478ffeb87a7/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:36 crc kubenswrapper[4806]: I0126 09:38:36.665267 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75fb9bfb7c-t5l28_236df924-79aa-410f-905e-aba909cdfae2/neutron-httpd/0.log" Jan 26 09:38:37 crc kubenswrapper[4806]: I0126 09:38:37.168917 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-snd4r_0c4ea336-2189-42c5-9c34-1ad75642efd0/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:37 crc kubenswrapper[4806]: I0126 09:38:37.271945 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-75fb9bfb7c-t5l28_236df924-79aa-410f-905e-aba909cdfae2/neutron-api/0.log" Jan 26 09:38:37 crc kubenswrapper[4806]: I0126 09:38:37.995095 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_aee89af5-d60e-4b49-938e-443c6299f3fa/nova-cell0-conductor-conductor/0.log" Jan 26 09:38:38 crc kubenswrapper[4806]: I0126 09:38:38.188783 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_58f0f405-bc44-4051-af6e-ece4bf71bdbb/nova-cell1-conductor-conductor/0.log" Jan 26 09:38:38 crc kubenswrapper[4806]: I0126 09:38:38.589458 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_9ce10b87-e354-4e13-9283-f1e15e0d5908/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 09:38:38 crc kubenswrapper[4806]: I0126 09:38:38.623405 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_58e34a8b-db1e-40a7-8d39-e791e2e45de9/nova-api-log/0.log" Jan 26 09:38:38 crc kubenswrapper[4806]: I0126 09:38:38.834153 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-tdxcm_0c6eebc2-cb5b-4524-931a-96b86b65585a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:39 crc kubenswrapper[4806]: I0126 09:38:39.063049 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47/nova-metadata-log/0.log" Jan 26 09:38:39 crc kubenswrapper[4806]: I0126 09:38:39.174982 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_58e34a8b-db1e-40a7-8d39-e791e2e45de9/nova-api-api/0.log" Jan 26 09:38:39 crc kubenswrapper[4806]: I0126 09:38:39.420410 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cc07bbaf-381b-4edc-acd9-48211c3eb4c6/mysql-bootstrap/0.log" Jan 26 09:38:39 crc kubenswrapper[4806]: I0126 09:38:39.665977 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cc07bbaf-381b-4edc-acd9-48211c3eb4c6/mysql-bootstrap/0.log" Jan 26 09:38:39 crc kubenswrapper[4806]: I0126 09:38:39.742604 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_cc07bbaf-381b-4edc-acd9-48211c3eb4c6/galera/0.log" Jan 26 09:38:39 crc kubenswrapper[4806]: I0126 09:38:39.813130 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_7b4904b8-0f8b-4492-b879-8361f8b9e092/nova-scheduler-scheduler/0.log" Jan 26 09:38:40 crc kubenswrapper[4806]: I0126 09:38:40.041804 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:38:40 crc kubenswrapper[4806]: E0126 09:38:40.042040 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:38:40 crc kubenswrapper[4806]: I0126 09:38:40.055212 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_70aa246b-31a1-4800-b76e-d50a2002a5f8/mysql-bootstrap/0.log" Jan 26 09:38:40 crc kubenswrapper[4806]: I0126 09:38:40.152603 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_70aa246b-31a1-4800-b76e-d50a2002a5f8/mysql-bootstrap/0.log" Jan 26 09:38:40 crc kubenswrapper[4806]: I0126 09:38:40.262005 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_70aa246b-31a1-4800-b76e-d50a2002a5f8/galera/0.log" Jan 26 09:38:40 crc kubenswrapper[4806]: I0126 09:38:40.566498 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f39640fb-b2ef-4514-84d0-38c6d07adb11/openstackclient/0.log" Jan 26 09:38:40 crc kubenswrapper[4806]: I0126 09:38:40.781233 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jb2zj_b5d47098-d6d7-4b59-a88c-4bfb7d643a89/ovn-controller/0.log" Jan 26 09:38:40 crc kubenswrapper[4806]: I0126 09:38:40.993414 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-j6k99_9c70579c-19d0-4675-9fab-75415cbcaf47/openstack-network-exporter/0.log" Jan 26 09:38:41 crc kubenswrapper[4806]: I0126 09:38:41.161503 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-r7hjs_a9b96a34-03af-4967-bec7-e1beda976396/ovsdb-server-init/0.log" Jan 26 09:38:41 crc kubenswrapper[4806]: I0126 09:38:41.413347 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-r7hjs_a9b96a34-03af-4967-bec7-e1beda976396/ovsdb-server-init/0.log" Jan 26 09:38:41 crc kubenswrapper[4806]: I0126 09:38:41.451726 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-r7hjs_a9b96a34-03af-4967-bec7-e1beda976396/ovs-vswitchd/0.log" Jan 26 09:38:41 crc kubenswrapper[4806]: I0126 09:38:41.533765 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-r7hjs_a9b96a34-03af-4967-bec7-e1beda976396/ovsdb-server/0.log" Jan 26 09:38:41 crc kubenswrapper[4806]: I0126 09:38:41.762275 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-ks5cf_b91476e2-3e0d-4447-b7d8-f9f4696ca1c7/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:41 crc kubenswrapper[4806]: I0126 09:38:41.819106 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ee33ca06-96f1-4ef2-ba3c-ddd2a15a2a47/nova-metadata-metadata/0.log" Jan 26 09:38:41 crc kubenswrapper[4806]: I0126 09:38:41.970428 4806 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_55fab6bb-f40a-4964-b87e-be61729787a2/ovn-northd/0.log" Jan 26 09:38:42 crc kubenswrapper[4806]: I0126 09:38:42.012712 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_55fab6bb-f40a-4964-b87e-be61729787a2/openstack-network-exporter/0.log" Jan 26 09:38:42 crc kubenswrapper[4806]: I0126 09:38:42.167507 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5/openstack-network-exporter/0.log" Jan 26 09:38:42 crc kubenswrapper[4806]: I0126 09:38:42.302002 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_04a8f9ea-cfac-4d1a-8ef9-9f3f4b2efcc5/ovsdbserver-nb/0.log" Jan 26 09:38:42 crc kubenswrapper[4806]: I0126 09:38:42.423363 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_69ceca24-1275-4ecf-a77b-acb2728d7cc4/openstack-network-exporter/0.log" Jan 26 09:38:42 crc kubenswrapper[4806]: I0126 09:38:42.463979 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_69ceca24-1275-4ecf-a77b-acb2728d7cc4/ovsdbserver-sb/0.log" Jan 26 09:38:42 crc kubenswrapper[4806]: I0126 09:38:42.811320 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f665b5db4-wpfmw_57bd9f7e-2311-4121-a33f-4610aecf4422/placement-api/0.log" Jan 26 09:38:42 crc kubenswrapper[4806]: I0126 09:38:42.963463 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da8e3d3a-b943-47b1-9c7d-5c44a1816934/setup-container/0.log" Jan 26 09:38:42 crc kubenswrapper[4806]: I0126 09:38:42.975357 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6f665b5db4-wpfmw_57bd9f7e-2311-4121-a33f-4610aecf4422/placement-log/0.log" Jan 26 09:38:43 crc kubenswrapper[4806]: I0126 09:38:43.282768 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da8e3d3a-b943-47b1-9c7d-5c44a1816934/rabbitmq/0.log" Jan 26 09:38:43 crc kubenswrapper[4806]: I0126 09:38:43.292086 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_86791215-4d1e-4b06-b013-fa551e935b74/setup-container/0.log" Jan 26 09:38:43 crc kubenswrapper[4806]: I0126 09:38:43.303749 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_da8e3d3a-b943-47b1-9c7d-5c44a1816934/setup-container/0.log" Jan 26 09:38:43 crc kubenswrapper[4806]: I0126 09:38:43.620177 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_86791215-4d1e-4b06-b013-fa551e935b74/setup-container/0.log" Jan 26 09:38:43 crc kubenswrapper[4806]: I0126 09:38:43.730552 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-5gckz_9a8bd166-d69f-424d-b0b6-bfb56d092e7d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:43 crc kubenswrapper[4806]: I0126 09:38:43.733996 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_86791215-4d1e-4b06-b013-fa551e935b74/rabbitmq/0.log" Jan 26 09:38:43 crc kubenswrapper[4806]: I0126 09:38:43.953601 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-xcpsd_e18dfde2-7334-4c26-a7bb-b79bf78fad03/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:44 crc 
kubenswrapper[4806]: I0126 09:38:44.053072 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-d4bdx_2f3c6fdc-4055-4dbe-b5cb-fedcfe4f9356/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:44 crc kubenswrapper[4806]: I0126 09:38:44.316410 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-pc9sc_ee3e603a-b29b-4774-87b1-83e26920dfde/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:44 crc kubenswrapper[4806]: I0126 09:38:44.716649 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-wdztn_382a8851-c811-4842-a4a9-e0a2e6d7f2e6/ssh-known-hosts-edpm-deployment/0.log" Jan 26 09:38:44 crc kubenswrapper[4806]: I0126 09:38:44.926980 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78c9d88fc9-5rs9s_7c49d653-a114-4352-afd1-a2ca43c811f1/proxy-server/0.log" Jan 26 09:38:45 crc kubenswrapper[4806]: I0126 09:38:45.051856 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-2dckc_061b909a-a88f-4261-9ccf-2daaf3958621/swift-ring-rebalance/0.log" Jan 26 09:38:45 crc kubenswrapper[4806]: I0126 09:38:45.087161 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-78c9d88fc9-5rs9s_7c49d653-a114-4352-afd1-a2ca43c811f1/proxy-httpd/0.log" Jan 26 09:38:45 crc kubenswrapper[4806]: I0126 09:38:45.239822 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/account-auditor/0.log" Jan 26 09:38:45 crc kubenswrapper[4806]: I0126 09:38:45.276693 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/account-reaper/0.log" Jan 26 09:38:45 crc kubenswrapper[4806]: I0126 09:38:45.374862 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/account-replicator/0.log" Jan 26 09:38:45 crc kubenswrapper[4806]: I0126 09:38:45.562218 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/container-auditor/0.log" Jan 26 09:38:45 crc kubenswrapper[4806]: I0126 09:38:45.584119 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/container-replicator/0.log" Jan 26 09:38:45 crc kubenswrapper[4806]: I0126 09:38:45.639967 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/account-server/0.log" Jan 26 09:38:45 crc kubenswrapper[4806]: I0126 09:38:45.782185 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/container-server/0.log" Jan 26 09:38:45 crc kubenswrapper[4806]: I0126 09:38:45.810184 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/container-updater/0.log" Jan 26 09:38:45 crc kubenswrapper[4806]: I0126 09:38:45.918957 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/object-expirer/0.log" Jan 26 09:38:45 crc kubenswrapper[4806]: I0126 09:38:45.920328 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/object-auditor/0.log" Jan 26 09:38:46 crc kubenswrapper[4806]: I0126 09:38:46.058962 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/object-server/0.log" Jan 26 09:38:46 crc kubenswrapper[4806]: I0126 09:38:46.183609 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/object-replicator/0.log" Jan 26 09:38:46 crc kubenswrapper[4806]: I0126 09:38:46.186974 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/rsync/0.log" Jan 26 09:38:46 crc kubenswrapper[4806]: I0126 09:38:46.236664 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/object-updater/0.log" Jan 26 09:38:46 crc kubenswrapper[4806]: I0126 09:38:46.348318 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_fcc22009-cca0-438b-8f2f-5c245db7c70c/swift-recon-cron/0.log" Jan 26 09:38:46 crc kubenswrapper[4806]: I0126 09:38:46.533925 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-72fbn_26657020-74ce-471a-8877-43f4fd4fde5d/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:47 crc kubenswrapper[4806]: I0126 09:38:47.006324 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-thread-testing_e2f598ac-916e-43f9-9d50-09c4be97c717/tempest-tests-tempest-tests-runner/0.log" Jan 26 09:38:47 crc kubenswrapper[4806]: I0126 09:38:47.125714 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_bda8645d-d202-47ae-a35e-c187b18dc23f/test-operator-logs-container/0.log" Jan 26 09:38:47 crc kubenswrapper[4806]: I0126 09:38:47.164670 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-multi-thread-testing_d392e063-ed04-4768-b95c-cbd7d0e5afda/tempest-tests-tempest-tests-runner/0.log" Jan 26 09:38:47 crc kubenswrapper[4806]: I0126 09:38:47.349606 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-kq6ht_47febe81-62f5-4336-a165-bbc520756fc7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 09:38:55 crc kubenswrapper[4806]: I0126 09:38:55.042244 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:38:56 crc kubenswrapper[4806]: I0126 09:38:56.163548 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"94db467df12a6038972412ef143c9c2677da69013ebdb4ab5f0a652f611d1b29"} Jan 26 09:38:57 crc kubenswrapper[4806]: I0126 09:38:57.825472 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_376996ab-adaf-4126-80f8-09242f277fe2/memcached/0.log" Jan 26 09:39:22 crc kubenswrapper[4806]: I0126 09:39:22.337753 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk_d350a047-9d1f-46ea-b0cd-54c9a629f49c/util/0.log" Jan 26 09:39:22 crc kubenswrapper[4806]: I0126 
09:39:22.653252 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk_d350a047-9d1f-46ea-b0cd-54c9a629f49c/pull/0.log" Jan 26 09:39:22 crc kubenswrapper[4806]: I0126 09:39:22.661907 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk_d350a047-9d1f-46ea-b0cd-54c9a629f49c/pull/0.log" Jan 26 09:39:22 crc kubenswrapper[4806]: I0126 09:39:22.669908 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk_d350a047-9d1f-46ea-b0cd-54c9a629f49c/util/0.log" Jan 26 09:39:22 crc kubenswrapper[4806]: I0126 09:39:22.845187 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk_d350a047-9d1f-46ea-b0cd-54c9a629f49c/pull/0.log" Jan 26 09:39:22 crc kubenswrapper[4806]: I0126 09:39:22.863547 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk_d350a047-9d1f-46ea-b0cd-54c9a629f49c/util/0.log" Jan 26 09:39:22 crc kubenswrapper[4806]: I0126 09:39:22.884629 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5f33e6d26e38b6659ab3f4d90d9fbf1cd409d6eb8076f273c58e43de55gxsxk_d350a047-9d1f-46ea-b0cd-54c9a629f49c/extract/0.log" Jan 26 09:39:23 crc kubenswrapper[4806]: I0126 09:39:23.139720 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-wwr7b_55e84831-2044-4555-844d-93053648d17a/manager/0.log" Jan 26 09:39:23 crc kubenswrapper[4806]: I0126 09:39:23.166231 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-9mwdz_9b0f19e9-5ee8-4f12-a453-2195b20a8f09/manager/0.log" Jan 26 09:39:23 crc kubenswrapper[4806]: I0126 09:39:23.420107 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-9ld8m_293159cc-40c4-4335-ad77-65f1c493e35a/manager/0.log" Jan 26 09:39:23 crc kubenswrapper[4806]: I0126 09:39:23.472865 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-psw9b_002839d6-a78d-4826-a93c-b6dec9671bab/manager/0.log" Jan 26 09:39:23 crc kubenswrapper[4806]: I0126 09:39:23.691573 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-6fzqz_d36b3dbe-4776-4c55-a64f-4ea15cad6fb7/manager/0.log" Jan 26 09:39:23 crc kubenswrapper[4806]: I0126 09:39:23.718554 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-q9hmq_3191a58e-ee1d-430f-97ce-c7c532d132a6/manager/0.log" Jan 26 09:39:23 crc kubenswrapper[4806]: I0126 09:39:23.914878 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-fd757_00c672af-a00d-45d3-9d80-39de7bbcf49c/manager/0.log" Jan 26 09:39:24 crc kubenswrapper[4806]: I0126 09:39:24.085921 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-8s72c_52ffe9cc-7d93-400f-a7ef-81d4c7335024/manager/0.log" Jan 26 09:39:24 crc 
kubenswrapper[4806]: I0126 09:39:24.261907 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-rf496_1ca63855-2d6d-4543-a084-4cdb7c6d0c5c/manager/0.log" Jan 26 09:39:24 crc kubenswrapper[4806]: I0126 09:39:24.351708 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-497jq_41df476c-557f-407c-8711-57c979600bea/manager/0.log" Jan 26 09:39:24 crc kubenswrapper[4806]: I0126 09:39:24.588775 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-lkcgg_5270b699-329c-41eb-a8cf-5f94eeb4cd11/manager/0.log" Jan 26 09:39:24 crc kubenswrapper[4806]: I0126 09:39:24.649085 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-rk454_167c1b32-0550-4c81-a2b6-b30e8d58dd3d/manager/0.log" Jan 26 09:39:24 crc kubenswrapper[4806]: I0126 09:39:24.840766 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-tj4m9_9096956d-1ed7-4e3c-bdec-d86c14168601/manager/0.log" Jan 26 09:39:24 crc kubenswrapper[4806]: I0126 09:39:24.919271 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-nk6xc_f84e4d06-7a1b-4038-b30f-ec7bf90efa2c/manager/0.log" Jan 26 09:39:25 crc kubenswrapper[4806]: I0126 09:39:25.067901 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854bc45q_f955001e-4d2d-437c-bc31-19a4234ed701/manager/0.log" Jan 26 09:39:25 crc kubenswrapper[4806]: I0126 09:39:25.232142 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-644d5c8bff-nqdhj_ec3ed82a-eb71-4099-80bd-be5ed1d06943/operator/0.log" Jan 26 09:39:25 crc kubenswrapper[4806]: I0126 09:39:25.460398 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8lljx_72beed4f-5ada-46d1-874a-3394b8768fd2/registry-server/0.log" Jan 26 09:39:25 crc kubenswrapper[4806]: I0126 09:39:25.758979 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-ktw6x_1a85568e-bc00-4bc5-a99e-bcef2f7041ee/manager/0.log" Jan 26 09:39:25 crc kubenswrapper[4806]: I0126 09:39:25.946231 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-ncfjs_109bb090-2776-45ce-b579-711304ae2db8/manager/0.log" Jan 26 09:39:26 crc kubenswrapper[4806]: I0126 09:39:26.174699 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-gnhrx_77931dd1-1acc-4552-8605-33a24c74fc43/operator/0.log" Jan 26 09:39:26 crc kubenswrapper[4806]: I0126 09:39:26.397406 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6898c455c-d6bzz_d6d4c1d3-c0cc-4cc3-b281-73f28ac929d2/manager/0.log" Jan 26 09:39:26 crc kubenswrapper[4806]: I0126 09:39:26.615930 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-swpm7_e048fc14-f2ba-4930-9e77-a281b25c7a07/manager/0.log" Jan 26 09:39:26 crc kubenswrapper[4806]: I0126 
09:39:26.847889 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-p4bvd_3959df09-4052-4ccb-8c3f-b3f5aebb747c/manager/0.log" Jan 26 09:39:26 crc kubenswrapper[4806]: I0126 09:39:26.931939 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-bhg2d_16faebac-962b-4520-bb85-f77bc1d781d1/manager/0.log" Jan 26 09:39:27 crc kubenswrapper[4806]: I0126 09:39:27.063759 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-zqkgl_f63ffecc-85dc-48df-b4d6-675d0792cacf/manager/0.log" Jan 26 09:39:48 crc kubenswrapper[4806]: I0126 09:39:48.854300 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jdcjm_bd8d4294-4075-456c-ab53-d3646b5117b5/control-plane-machine-set-operator/0.log" Jan 26 09:39:49 crc kubenswrapper[4806]: I0126 09:39:49.179854 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gk5q9_0f3802bf-e4bc-4952-9e22-428d62ec0349/kube-rbac-proxy/0.log" Jan 26 09:39:49 crc kubenswrapper[4806]: I0126 09:39:49.268051 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-gk5q9_0f3802bf-e4bc-4952-9e22-428d62ec0349/machine-api-operator/0.log" Jan 26 09:40:03 crc kubenswrapper[4806]: I0126 09:40:03.833705 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-shxsb_d8ad5ee7-8dde-482e-9c75-0114fb096dfb/cert-manager-controller/0.log" Jan 26 09:40:04 crc kubenswrapper[4806]: I0126 09:40:04.121183 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-x8z22_dcc781ef-dcbe-4eb5-9291-3486d5ef0d00/cert-manager-cainjector/0.log" Jan 26 09:40:04 crc kubenswrapper[4806]: I0126 09:40:04.171693 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-j6gd8_bd1576cf-642f-4fc6-86fb-2d144fbd299c/cert-manager-webhook/0.log" Jan 26 09:40:18 crc kubenswrapper[4806]: I0126 09:40:18.063325 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-8wp62_ff219c8b-8864-482f-9524-11c05e3fef70/nmstate-console-plugin/0.log" Jan 26 09:40:18 crc kubenswrapper[4806]: I0126 09:40:18.405223 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-jbwmp_e20bf81f-2252-43c2-9f16-0cca133f9b13/nmstate-handler/0.log" Jan 26 09:40:18 crc kubenswrapper[4806]: I0126 09:40:18.449699 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-chwqp_8c540a44-feb5-4c62-b4ad-f1f0dfd40576/nmstate-metrics/0.log" Jan 26 09:40:18 crc kubenswrapper[4806]: I0126 09:40:18.522250 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-chwqp_8c540a44-feb5-4c62-b4ad-f1f0dfd40576/kube-rbac-proxy/0.log" Jan 26 09:40:18 crc kubenswrapper[4806]: I0126 09:40:18.720400 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-r8wcz_4a979c8c-9902-414a-8458-4cac2b34e61d/nmstate-operator/0.log" Jan 26 09:40:18 crc kubenswrapper[4806]: I0126 09:40:18.779235 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-bmxmt_22afad0a-47c8-44b3-87b3-342559ef78f5/nmstate-webhook/0.log" Jan 26 09:40:48 crc kubenswrapper[4806]: I0126 09:40:48.224937 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8527s_6eb65788-c61b-4b04-931e-d122493e153b/controller/0.log" Jan 26 09:40:48 crc kubenswrapper[4806]: I0126 09:40:48.275822 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8527s_6eb65788-c61b-4b04-931e-d122493e153b/kube-rbac-proxy/0.log" Jan 26 09:40:48 crc kubenswrapper[4806]: I0126 09:40:48.521540 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/cp-frr-files/0.log" Jan 26 09:40:48 crc kubenswrapper[4806]: I0126 09:40:48.611953 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/cp-frr-files/0.log" Jan 26 09:40:48 crc kubenswrapper[4806]: I0126 09:40:48.654477 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/cp-metrics/0.log" Jan 26 09:40:48 crc kubenswrapper[4806]: I0126 09:40:48.675574 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/cp-reloader/0.log" Jan 26 09:40:48 crc kubenswrapper[4806]: I0126 09:40:48.768316 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/cp-reloader/0.log" Jan 26 09:40:48 crc kubenswrapper[4806]: I0126 09:40:48.965073 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/cp-frr-files/0.log" Jan 26 09:40:49 crc kubenswrapper[4806]: I0126 09:40:49.017051 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/cp-reloader/0.log" Jan 26 09:40:49 crc kubenswrapper[4806]: I0126 09:40:49.054425 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/cp-metrics/0.log" Jan 26 09:40:49 crc kubenswrapper[4806]: I0126 09:40:49.062582 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/cp-metrics/0.log" Jan 26 09:40:49 crc kubenswrapper[4806]: I0126 09:40:49.278851 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/cp-metrics/0.log" Jan 26 09:40:49 crc kubenswrapper[4806]: I0126 09:40:49.349240 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/cp-reloader/0.log" Jan 26 09:40:49 crc kubenswrapper[4806]: I0126 09:40:49.350397 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/cp-frr-files/0.log" Jan 26 09:40:49 crc kubenswrapper[4806]: I0126 09:40:49.406200 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/controller/0.log" Jan 26 09:40:49 crc kubenswrapper[4806]: I0126 09:40:49.576826 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/kube-rbac-proxy/0.log" Jan 26 09:40:49 crc kubenswrapper[4806]: I0126 09:40:49.613403 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/kube-rbac-proxy-frr/0.log" Jan 26 09:40:49 crc kubenswrapper[4806]: I0126 09:40:49.673096 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/frr-metrics/0.log" Jan 26 09:40:50 crc kubenswrapper[4806]: I0126 09:40:50.170622 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/reloader/0.log" Jan 26 09:40:50 crc kubenswrapper[4806]: I0126 09:40:50.379349 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-s45xn_fe95a617-3830-4e48-99fe-fc542f07b380/frr-k8s-webhook-server/0.log" Jan 26 09:40:50 crc kubenswrapper[4806]: I0126 09:40:50.614266 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7484b44c99-sst7d_73ae6aae-d47c-4eb0-a300-4cf672c00caa/manager/0.log" Jan 26 09:40:50 crc kubenswrapper[4806]: I0126 09:40:50.778433 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7f57678986-nbfxh_2e5fa748-c882-43c5-9ecd-6d3d97c944ec/webhook-server/0.log" Jan 26 09:40:51 crc kubenswrapper[4806]: I0126 09:40:51.008942 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tg7xx_e4ff47b2-636a-4bae-88ae-6fde41f5cdfc/kube-rbac-proxy/0.log" Jan 26 09:40:51 crc kubenswrapper[4806]: I0126 09:40:51.272115 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jws2v_f35f3a4f-d62b-4a20-85c0-09e66c185e14/frr/0.log" Jan 26 09:40:51 crc kubenswrapper[4806]: I0126 09:40:51.534443 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-tg7xx_e4ff47b2-636a-4bae-88ae-6fde41f5cdfc/speaker/0.log" Jan 26 09:41:06 crc kubenswrapper[4806]: I0126 09:41:06.719744 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6_4da3d341-b501-409d-9834-02c5ccf5cada/util/0.log" Jan 26 09:41:06 crc kubenswrapper[4806]: I0126 09:41:06.994559 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6_4da3d341-b501-409d-9834-02c5ccf5cada/util/0.log" Jan 26 09:41:07 crc kubenswrapper[4806]: I0126 09:41:07.044240 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6_4da3d341-b501-409d-9834-02c5ccf5cada/pull/0.log" Jan 26 09:41:07 crc kubenswrapper[4806]: I0126 09:41:07.054221 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6_4da3d341-b501-409d-9834-02c5ccf5cada/pull/0.log" Jan 26 09:41:07 crc kubenswrapper[4806]: I0126 09:41:07.241743 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6_4da3d341-b501-409d-9834-02c5ccf5cada/util/0.log" Jan 26 09:41:07 crc kubenswrapper[4806]: I0126 09:41:07.288265 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6_4da3d341-b501-409d-9834-02c5ccf5cada/extract/0.log" Jan 26 09:41:07 crc kubenswrapper[4806]: I0126 09:41:07.315880 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc8fql6_4da3d341-b501-409d-9834-02c5ccf5cada/pull/0.log" Jan 26 09:41:07 crc kubenswrapper[4806]: I0126 09:41:07.465196 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p_7e4ad2fa-5351-43e6-b30b-646bd63ade85/util/0.log" Jan 26 09:41:07 crc kubenswrapper[4806]: I0126 09:41:07.755324 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p_7e4ad2fa-5351-43e6-b30b-646bd63ade85/pull/0.log" Jan 26 09:41:07 crc kubenswrapper[4806]: I0126 09:41:07.755397 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p_7e4ad2fa-5351-43e6-b30b-646bd63ade85/pull/0.log" Jan 26 09:41:07 crc kubenswrapper[4806]: I0126 09:41:07.760655 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p_7e4ad2fa-5351-43e6-b30b-646bd63ade85/util/0.log" Jan 26 09:41:07 crc kubenswrapper[4806]: I0126 09:41:07.973937 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p_7e4ad2fa-5351-43e6-b30b-646bd63ade85/extract/0.log" Jan 26 09:41:07 crc kubenswrapper[4806]: I0126 09:41:07.992581 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p_7e4ad2fa-5351-43e6-b30b-646bd63ade85/util/0.log" Jan 26 09:41:07 crc kubenswrapper[4806]: I0126 09:41:07.993496 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7139nf4p_7e4ad2fa-5351-43e6-b30b-646bd63ade85/pull/0.log" Jan 26 09:41:08 crc kubenswrapper[4806]: I0126 09:41:08.163110 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f427x_0c92773d-ebe0-4739-9668-f826721f9a36/extract-utilities/0.log" Jan 26 09:41:08 crc kubenswrapper[4806]: I0126 09:41:08.427061 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f427x_0c92773d-ebe0-4739-9668-f826721f9a36/extract-content/0.log" Jan 26 09:41:08 crc kubenswrapper[4806]: I0126 09:41:08.427185 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f427x_0c92773d-ebe0-4739-9668-f826721f9a36/extract-utilities/0.log" Jan 26 09:41:08 crc kubenswrapper[4806]: I0126 09:41:08.495176 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f427x_0c92773d-ebe0-4739-9668-f826721f9a36/extract-content/0.log" Jan 26 09:41:08 crc kubenswrapper[4806]: I0126 09:41:08.694582 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f427x_0c92773d-ebe0-4739-9668-f826721f9a36/extract-utilities/0.log" Jan 26 09:41:08 crc kubenswrapper[4806]: I0126 09:41:08.752808 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-f427x_0c92773d-ebe0-4739-9668-f826721f9a36/extract-content/0.log" Jan 26 09:41:08 crc kubenswrapper[4806]: I0126 09:41:08.952833 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xfp4f_557bfe89-c128-469e-8e26-f80ecb3a1cb1/extract-utilities/0.log" Jan 26 09:41:09 crc kubenswrapper[4806]: I0126 09:41:09.184683 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xfp4f_557bfe89-c128-469e-8e26-f80ecb3a1cb1/extract-utilities/0.log" Jan 26 09:41:09 crc kubenswrapper[4806]: I0126 09:41:09.257616 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xfp4f_557bfe89-c128-469e-8e26-f80ecb3a1cb1/extract-content/0.log" Jan 26 09:41:09 crc kubenswrapper[4806]: I0126 09:41:09.657248 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xfp4f_557bfe89-c128-469e-8e26-f80ecb3a1cb1/extract-content/0.log" Jan 26 09:41:09 crc kubenswrapper[4806]: I0126 09:41:09.895983 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-f427x_0c92773d-ebe0-4739-9668-f826721f9a36/registry-server/0.log" Jan 26 09:41:09 crc kubenswrapper[4806]: I0126 09:41:09.906193 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xfp4f_557bfe89-c128-469e-8e26-f80ecb3a1cb1/extract-content/0.log" Jan 26 09:41:09 crc kubenswrapper[4806]: I0126 09:41:09.952683 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xfp4f_557bfe89-c128-469e-8e26-f80ecb3a1cb1/extract-utilities/0.log" Jan 26 09:41:10 crc kubenswrapper[4806]: I0126 09:41:10.279385 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4mgsn_54cc9617-4cbc-4346-916a-cded431da40b/marketplace-operator/0.log" Jan 26 09:41:10 crc kubenswrapper[4806]: I0126 09:41:10.604542 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rqfsl_b94c8b95-1f08-4f96-a9c0-47aef79a823b/extract-utilities/0.log" Jan 26 09:41:10 crc kubenswrapper[4806]: I0126 09:41:10.704757 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xfp4f_557bfe89-c128-469e-8e26-f80ecb3a1cb1/registry-server/0.log" Jan 26 09:41:10 crc kubenswrapper[4806]: I0126 09:41:10.789750 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rqfsl_b94c8b95-1f08-4f96-a9c0-47aef79a823b/extract-content/0.log" Jan 26 09:41:10 crc kubenswrapper[4806]: I0126 09:41:10.801150 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rqfsl_b94c8b95-1f08-4f96-a9c0-47aef79a823b/extract-utilities/0.log" Jan 26 09:41:10 crc kubenswrapper[4806]: I0126 09:41:10.907619 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rqfsl_b94c8b95-1f08-4f96-a9c0-47aef79a823b/extract-content/0.log" Jan 26 09:41:11 crc kubenswrapper[4806]: I0126 09:41:11.157648 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rqfsl_b94c8b95-1f08-4f96-a9c0-47aef79a823b/extract-content/0.log" Jan 26 09:41:11 crc kubenswrapper[4806]: I0126 09:41:11.209245 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-rqfsl_b94c8b95-1f08-4f96-a9c0-47aef79a823b/extract-utilities/0.log" Jan 26 09:41:11 crc kubenswrapper[4806]: I0126 09:41:11.413727 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lnpjg_c55665b2-fe11-48ad-9699-5bf16993d344/extract-utilities/0.log" Jan 26 09:41:11 crc kubenswrapper[4806]: I0126 09:41:11.604307 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lnpjg_c55665b2-fe11-48ad-9699-5bf16993d344/extract-content/0.log" Jan 26 09:41:11 crc kubenswrapper[4806]: I0126 09:41:11.611602 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lnpjg_c55665b2-fe11-48ad-9699-5bf16993d344/extract-utilities/0.log" Jan 26 09:41:11 crc kubenswrapper[4806]: I0126 09:41:11.669866 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lnpjg_c55665b2-fe11-48ad-9699-5bf16993d344/extract-content/0.log" Jan 26 09:41:11 crc kubenswrapper[4806]: I0126 09:41:11.894849 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rqfsl_b94c8b95-1f08-4f96-a9c0-47aef79a823b/registry-server/0.log" Jan 26 09:41:11 crc kubenswrapper[4806]: I0126 09:41:11.913536 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lnpjg_c55665b2-fe11-48ad-9699-5bf16993d344/extract-content/0.log" Jan 26 09:41:11 crc kubenswrapper[4806]: I0126 09:41:11.990069 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lnpjg_c55665b2-fe11-48ad-9699-5bf16993d344/extract-utilities/0.log" Jan 26 09:41:12 crc kubenswrapper[4806]: I0126 09:41:12.775473 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lnpjg_c55665b2-fe11-48ad-9699-5bf16993d344/registry-server/0.log" Jan 26 09:41:15 crc kubenswrapper[4806]: I0126 09:41:15.806678 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:41:15 crc kubenswrapper[4806]: I0126 09:41:15.807459 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:41:45 crc kubenswrapper[4806]: I0126 09:41:45.806473 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:41:45 crc kubenswrapper[4806]: I0126 09:41:45.807119 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:42:15 crc kubenswrapper[4806]: I0126 09:42:15.806667 4806 
patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:42:15 crc kubenswrapper[4806]: I0126 09:42:15.807241 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:42:15 crc kubenswrapper[4806]: I0126 09:42:15.807292 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 09:42:15 crc kubenswrapper[4806]: I0126 09:42:15.808593 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94db467df12a6038972412ef143c9c2677da69013ebdb4ab5f0a652f611d1b29"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:42:15 crc kubenswrapper[4806]: I0126 09:42:15.808902 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://94db467df12a6038972412ef143c9c2677da69013ebdb4ab5f0a652f611d1b29" gracePeriod=600 Jan 26 09:42:16 crc kubenswrapper[4806]: I0126 09:42:16.048881 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="94db467df12a6038972412ef143c9c2677da69013ebdb4ab5f0a652f611d1b29" exitCode=0 Jan 26 09:42:16 crc kubenswrapper[4806]: I0126 09:42:16.049193 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"94db467df12a6038972412ef143c9c2677da69013ebdb4ab5f0a652f611d1b29"} Jan 26 09:42:16 crc kubenswrapper[4806]: I0126 09:42:16.049930 4806 scope.go:117] "RemoveContainer" containerID="0b4284bb6f66d7af192b1b55fa536fcbb56d0e5fcd0b5a1dafbad5325167e098" Jan 26 09:42:17 crc kubenswrapper[4806]: I0126 09:42:17.063480 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerStarted","Data":"30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017"} Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.683187 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-btssx"] Jan 26 09:42:44 crc kubenswrapper[4806]: E0126 09:42:44.685909 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3d2c35-5ee8-4770-b892-74c4d62d70ad" containerName="container-00" Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.686023 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3d2c35-5ee8-4770-b892-74c4d62d70ad" containerName="container-00" Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.686628 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e3d2c35-5ee8-4770-b892-74c4d62d70ad" 
containerName="container-00" Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.690218 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.717273 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btssx"] Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.757936 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kdzs\" (UniqueName: \"kubernetes.io/projected/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-kube-api-access-8kdzs\") pod \"community-operators-btssx\" (UID: \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\") " pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.757995 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-catalog-content\") pod \"community-operators-btssx\" (UID: \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\") " pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.758083 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-utilities\") pod \"community-operators-btssx\" (UID: \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\") " pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.860103 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kdzs\" (UniqueName: \"kubernetes.io/projected/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-kube-api-access-8kdzs\") pod \"community-operators-btssx\" (UID: \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\") " pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.860152 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-catalog-content\") pod \"community-operators-btssx\" (UID: \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\") " pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.860183 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-utilities\") pod \"community-operators-btssx\" (UID: \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\") " pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.860804 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-catalog-content\") pod \"community-operators-btssx\" (UID: \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\") " pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.861014 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-utilities\") pod \"community-operators-btssx\" (UID: \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\") " 
pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:44 crc kubenswrapper[4806]: I0126 09:42:44.887749 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kdzs\" (UniqueName: \"kubernetes.io/projected/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-kube-api-access-8kdzs\") pod \"community-operators-btssx\" (UID: \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\") " pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:45 crc kubenswrapper[4806]: I0126 09:42:45.015741 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:45 crc kubenswrapper[4806]: I0126 09:42:45.704872 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btssx"] Jan 26 09:42:46 crc kubenswrapper[4806]: I0126 09:42:46.403871 4806 generic.go:334] "Generic (PLEG): container finished" podID="e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" containerID="bb9276c8cf0fb47723fae579daa723c3a7ddfbdd955c49ec2be36b95a5c26c77" exitCode=0 Jan 26 09:42:46 crc kubenswrapper[4806]: I0126 09:42:46.404015 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btssx" event={"ID":"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e","Type":"ContainerDied","Data":"bb9276c8cf0fb47723fae579daa723c3a7ddfbdd955c49ec2be36b95a5c26c77"} Jan 26 09:42:46 crc kubenswrapper[4806]: I0126 09:42:46.404661 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btssx" event={"ID":"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e","Type":"ContainerStarted","Data":"a4872746335100b6be78be9a200b9586456ac2320667586728a928d763578454"} Jan 26 09:42:46 crc kubenswrapper[4806]: I0126 09:42:46.409308 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 09:42:47 crc kubenswrapper[4806]: I0126 09:42:47.414390 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btssx" event={"ID":"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e","Type":"ContainerStarted","Data":"4fa44602ff024c0e82488789c4776e250bc45e037caf68707636f749bae46861"} Jan 26 09:42:48 crc kubenswrapper[4806]: I0126 09:42:48.423289 4806 generic.go:334] "Generic (PLEG): container finished" podID="e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" containerID="4fa44602ff024c0e82488789c4776e250bc45e037caf68707636f749bae46861" exitCode=0 Jan 26 09:42:48 crc kubenswrapper[4806]: I0126 09:42:48.423584 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btssx" event={"ID":"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e","Type":"ContainerDied","Data":"4fa44602ff024c0e82488789c4776e250bc45e037caf68707636f749bae46861"} Jan 26 09:42:49 crc kubenswrapper[4806]: I0126 09:42:49.464089 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btssx" event={"ID":"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e","Type":"ContainerStarted","Data":"cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece"} Jan 26 09:42:49 crc kubenswrapper[4806]: I0126 09:42:49.523907 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-btssx" podStartSLOduration=3.080816971 podStartE2EDuration="5.522888761s" podCreationTimestamp="2026-01-26 09:42:44 +0000 UTC" firstStartedPulling="2026-01-26 09:42:46.405977907 +0000 UTC m=+6545.670385963" lastFinishedPulling="2026-01-26 09:42:48.848049697 
+0000 UTC m=+6548.112457753" observedRunningTime="2026-01-26 09:42:49.51653578 +0000 UTC m=+6548.780943866" watchObservedRunningTime="2026-01-26 09:42:49.522888761 +0000 UTC m=+6548.787296817" Jan 26 09:42:55 crc kubenswrapper[4806]: I0126 09:42:55.016708 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:55 crc kubenswrapper[4806]: I0126 09:42:55.017187 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:55 crc kubenswrapper[4806]: I0126 09:42:55.108509 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:55 crc kubenswrapper[4806]: I0126 09:42:55.581106 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:55 crc kubenswrapper[4806]: I0126 09:42:55.652725 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btssx"] Jan 26 09:42:57 crc kubenswrapper[4806]: I0126 09:42:57.537638 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-btssx" podUID="e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" containerName="registry-server" containerID="cri-o://cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece" gracePeriod=2 Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.065893 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.227715 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-catalog-content\") pod \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\" (UID: \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\") " Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.228011 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kdzs\" (UniqueName: \"kubernetes.io/projected/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-kube-api-access-8kdzs\") pod \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\" (UID: \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\") " Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.228076 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-utilities\") pod \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\" (UID: \"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e\") " Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.229609 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-utilities" (OuterVolumeSpecName: "utilities") pod "e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" (UID: "e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.237629 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-kube-api-access-8kdzs" (OuterVolumeSpecName: "kube-api-access-8kdzs") pod "e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" (UID: "e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e"). 
InnerVolumeSpecName "kube-api-access-8kdzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.295119 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" (UID: "e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.330445 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kdzs\" (UniqueName: \"kubernetes.io/projected/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-kube-api-access-8kdzs\") on node \"crc\" DevicePath \"\"" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.330490 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.330502 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.549704 4806 generic.go:334] "Generic (PLEG): container finished" podID="e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" containerID="cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece" exitCode=0 Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.549745 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btssx" event={"ID":"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e","Type":"ContainerDied","Data":"cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece"} Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.549770 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btssx" event={"ID":"e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e","Type":"ContainerDied","Data":"a4872746335100b6be78be9a200b9586456ac2320667586728a928d763578454"} Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.549787 4806 scope.go:117] "RemoveContainer" containerID="cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.549909 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-btssx" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.583670 4806 scope.go:117] "RemoveContainer" containerID="4fa44602ff024c0e82488789c4776e250bc45e037caf68707636f749bae46861" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.600594 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btssx"] Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.610384 4806 scope.go:117] "RemoveContainer" containerID="bb9276c8cf0fb47723fae579daa723c3a7ddfbdd955c49ec2be36b95a5c26c77" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.614016 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-btssx"] Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.652667 4806 scope.go:117] "RemoveContainer" containerID="cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece" Jan 26 09:42:58 crc kubenswrapper[4806]: E0126 09:42:58.653448 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece\": container with ID starting with cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece not found: ID does not exist" containerID="cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.653693 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece"} err="failed to get container status \"cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece\": rpc error: code = NotFound desc = could not find container \"cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece\": container with ID starting with cb7115c21107a8457992beaaf7e0ab23d421ff710c7f82d46c4d3d51f14f0ece not found: ID does not exist" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.653726 4806 scope.go:117] "RemoveContainer" containerID="4fa44602ff024c0e82488789c4776e250bc45e037caf68707636f749bae46861" Jan 26 09:42:58 crc kubenswrapper[4806]: E0126 09:42:58.654042 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fa44602ff024c0e82488789c4776e250bc45e037caf68707636f749bae46861\": container with ID starting with 4fa44602ff024c0e82488789c4776e250bc45e037caf68707636f749bae46861 not found: ID does not exist" containerID="4fa44602ff024c0e82488789c4776e250bc45e037caf68707636f749bae46861" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.654100 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fa44602ff024c0e82488789c4776e250bc45e037caf68707636f749bae46861"} err="failed to get container status \"4fa44602ff024c0e82488789c4776e250bc45e037caf68707636f749bae46861\": rpc error: code = NotFound desc = could not find container \"4fa44602ff024c0e82488789c4776e250bc45e037caf68707636f749bae46861\": container with ID starting with 4fa44602ff024c0e82488789c4776e250bc45e037caf68707636f749bae46861 not found: ID does not exist" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.654135 4806 scope.go:117] "RemoveContainer" containerID="bb9276c8cf0fb47723fae579daa723c3a7ddfbdd955c49ec2be36b95a5c26c77" Jan 26 09:42:58 crc kubenswrapper[4806]: E0126 09:42:58.654598 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"bb9276c8cf0fb47723fae579daa723c3a7ddfbdd955c49ec2be36b95a5c26c77\": container with ID starting with bb9276c8cf0fb47723fae579daa723c3a7ddfbdd955c49ec2be36b95a5c26c77 not found: ID does not exist" containerID="bb9276c8cf0fb47723fae579daa723c3a7ddfbdd955c49ec2be36b95a5c26c77" Jan 26 09:42:58 crc kubenswrapper[4806]: I0126 09:42:58.654625 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb9276c8cf0fb47723fae579daa723c3a7ddfbdd955c49ec2be36b95a5c26c77"} err="failed to get container status \"bb9276c8cf0fb47723fae579daa723c3a7ddfbdd955c49ec2be36b95a5c26c77\": rpc error: code = NotFound desc = could not find container \"bb9276c8cf0fb47723fae579daa723c3a7ddfbdd955c49ec2be36b95a5c26c77\": container with ID starting with bb9276c8cf0fb47723fae579daa723c3a7ddfbdd955c49ec2be36b95a5c26c77 not found: ID does not exist" Jan 26 09:42:59 crc kubenswrapper[4806]: I0126 09:42:59.056055 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" path="/var/lib/kubelet/pods/e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e/volumes" Jan 26 09:43:31 crc kubenswrapper[4806]: I0126 09:43:31.937725 4806 generic.go:334] "Generic (PLEG): container finished" podID="f25dd6f2-cd3b-42ea-8adc-d435c977286c" containerID="7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd" exitCode=0 Jan 26 09:43:31 crc kubenswrapper[4806]: I0126 09:43:31.937849 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6xrt9/must-gather-647n6" event={"ID":"f25dd6f2-cd3b-42ea-8adc-d435c977286c","Type":"ContainerDied","Data":"7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd"} Jan 26 09:43:31 crc kubenswrapper[4806]: I0126 09:43:31.938671 4806 scope.go:117] "RemoveContainer" containerID="7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd" Jan 26 09:43:32 crc kubenswrapper[4806]: I0126 09:43:32.702834 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6xrt9_must-gather-647n6_f25dd6f2-cd3b-42ea-8adc-d435c977286c/gather/0.log" Jan 26 09:43:39 crc kubenswrapper[4806]: I0126 09:43:39.536737 4806 scope.go:117] "RemoveContainer" containerID="87d06e8ba843f8a25a7e251c4c041fbeda9b69965594f05060d02c640e763bc9" Jan 26 09:43:41 crc kubenswrapper[4806]: I0126 09:43:41.067296 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6xrt9/must-gather-647n6"] Jan 26 09:43:41 crc kubenswrapper[4806]: I0126 09:43:41.067561 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6xrt9/must-gather-647n6"] Jan 26 09:43:41 crc kubenswrapper[4806]: I0126 09:43:41.067739 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6xrt9/must-gather-647n6" podUID="f25dd6f2-cd3b-42ea-8adc-d435c977286c" containerName="copy" containerID="cri-o://6a153f9564b5b1dc0308e6cb6b0f362ea4819e3ebf8be6e200c105a95ceb5bf6" gracePeriod=2 Jan 26 09:43:41 crc kubenswrapper[4806]: I0126 09:43:41.595923 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6xrt9_must-gather-647n6_f25dd6f2-cd3b-42ea-8adc-d435c977286c/copy/0.log" Jan 26 09:43:41 crc kubenswrapper[4806]: I0126 09:43:41.599201 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6xrt9/must-gather-647n6" Jan 26 09:43:41 crc kubenswrapper[4806]: I0126 09:43:41.750338 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f25dd6f2-cd3b-42ea-8adc-d435c977286c-must-gather-output\") pod \"f25dd6f2-cd3b-42ea-8adc-d435c977286c\" (UID: \"f25dd6f2-cd3b-42ea-8adc-d435c977286c\") " Jan 26 09:43:41 crc kubenswrapper[4806]: I0126 09:43:41.750669 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hg6l\" (UniqueName: \"kubernetes.io/projected/f25dd6f2-cd3b-42ea-8adc-d435c977286c-kube-api-access-5hg6l\") pod \"f25dd6f2-cd3b-42ea-8adc-d435c977286c\" (UID: \"f25dd6f2-cd3b-42ea-8adc-d435c977286c\") " Jan 26 09:43:41 crc kubenswrapper[4806]: I0126 09:43:41.761973 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25dd6f2-cd3b-42ea-8adc-d435c977286c-kube-api-access-5hg6l" (OuterVolumeSpecName: "kube-api-access-5hg6l") pod "f25dd6f2-cd3b-42ea-8adc-d435c977286c" (UID: "f25dd6f2-cd3b-42ea-8adc-d435c977286c"). InnerVolumeSpecName "kube-api-access-5hg6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:43:41 crc kubenswrapper[4806]: I0126 09:43:41.852583 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hg6l\" (UniqueName: \"kubernetes.io/projected/f25dd6f2-cd3b-42ea-8adc-d435c977286c-kube-api-access-5hg6l\") on node \"crc\" DevicePath \"\"" Jan 26 09:43:42 crc kubenswrapper[4806]: I0126 09:43:42.079770 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f25dd6f2-cd3b-42ea-8adc-d435c977286c-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f25dd6f2-cd3b-42ea-8adc-d435c977286c" (UID: "f25dd6f2-cd3b-42ea-8adc-d435c977286c"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:43:42 crc kubenswrapper[4806]: I0126 09:43:42.140912 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6xrt9_must-gather-647n6_f25dd6f2-cd3b-42ea-8adc-d435c977286c/copy/0.log" Jan 26 09:43:42 crc kubenswrapper[4806]: I0126 09:43:42.141610 4806 generic.go:334] "Generic (PLEG): container finished" podID="f25dd6f2-cd3b-42ea-8adc-d435c977286c" containerID="6a153f9564b5b1dc0308e6cb6b0f362ea4819e3ebf8be6e200c105a95ceb5bf6" exitCode=143 Jan 26 09:43:42 crc kubenswrapper[4806]: I0126 09:43:42.141658 4806 scope.go:117] "RemoveContainer" containerID="6a153f9564b5b1dc0308e6cb6b0f362ea4819e3ebf8be6e200c105a95ceb5bf6" Jan 26 09:43:42 crc kubenswrapper[4806]: I0126 09:43:42.141679 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6xrt9/must-gather-647n6" Jan 26 09:43:42 crc kubenswrapper[4806]: I0126 09:43:42.163058 4806 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f25dd6f2-cd3b-42ea-8adc-d435c977286c-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 26 09:43:42 crc kubenswrapper[4806]: I0126 09:43:42.174348 4806 scope.go:117] "RemoveContainer" containerID="7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd" Jan 26 09:43:42 crc kubenswrapper[4806]: I0126 09:43:42.224063 4806 scope.go:117] "RemoveContainer" containerID="6a153f9564b5b1dc0308e6cb6b0f362ea4819e3ebf8be6e200c105a95ceb5bf6" Jan 26 09:43:42 crc kubenswrapper[4806]: E0126 09:43:42.224871 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a153f9564b5b1dc0308e6cb6b0f362ea4819e3ebf8be6e200c105a95ceb5bf6\": container with ID starting with 6a153f9564b5b1dc0308e6cb6b0f362ea4819e3ebf8be6e200c105a95ceb5bf6 not found: ID does not exist" containerID="6a153f9564b5b1dc0308e6cb6b0f362ea4819e3ebf8be6e200c105a95ceb5bf6" Jan 26 09:43:42 crc kubenswrapper[4806]: I0126 09:43:42.224901 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a153f9564b5b1dc0308e6cb6b0f362ea4819e3ebf8be6e200c105a95ceb5bf6"} err="failed to get container status \"6a153f9564b5b1dc0308e6cb6b0f362ea4819e3ebf8be6e200c105a95ceb5bf6\": rpc error: code = NotFound desc = could not find container \"6a153f9564b5b1dc0308e6cb6b0f362ea4819e3ebf8be6e200c105a95ceb5bf6\": container with ID starting with 6a153f9564b5b1dc0308e6cb6b0f362ea4819e3ebf8be6e200c105a95ceb5bf6 not found: ID does not exist" Jan 26 09:43:42 crc kubenswrapper[4806]: I0126 09:43:42.224922 4806 scope.go:117] "RemoveContainer" containerID="7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd" Jan 26 09:43:42 crc kubenswrapper[4806]: E0126 09:43:42.225285 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd\": container with ID starting with 7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd not found: ID does not exist" containerID="7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd" Jan 26 09:43:42 crc kubenswrapper[4806]: I0126 09:43:42.225303 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd"} err="failed to get container status \"7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd\": rpc error: code = NotFound desc = could not find container \"7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd\": container with ID starting with 7b7f1efce964bb1e553560fc8a687336c289378ca7de82c98e2f20dd27bde5dd not found: ID does not exist" Jan 26 09:43:43 crc kubenswrapper[4806]: I0126 09:43:43.069680 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f25dd6f2-cd3b-42ea-8adc-d435c977286c" path="/var/lib/kubelet/pods/f25dd6f2-cd3b-42ea-8adc-d435c977286c/volumes" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.027458 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jsvcv"] Jan 26 09:44:18 crc kubenswrapper[4806]: E0126 09:44:18.028645 4806 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" containerName="extract-content" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.028663 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" containerName="extract-content" Jan 26 09:44:18 crc kubenswrapper[4806]: E0126 09:44:18.028689 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" containerName="extract-utilities" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.028697 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" containerName="extract-utilities" Jan 26 09:44:18 crc kubenswrapper[4806]: E0126 09:44:18.028738 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25dd6f2-cd3b-42ea-8adc-d435c977286c" containerName="gather" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.028747 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25dd6f2-cd3b-42ea-8adc-d435c977286c" containerName="gather" Jan 26 09:44:18 crc kubenswrapper[4806]: E0126 09:44:18.028768 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25dd6f2-cd3b-42ea-8adc-d435c977286c" containerName="copy" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.028795 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25dd6f2-cd3b-42ea-8adc-d435c977286c" containerName="copy" Jan 26 09:44:18 crc kubenswrapper[4806]: E0126 09:44:18.028812 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" containerName="registry-server" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.028819 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" containerName="registry-server" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.029038 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e65a42f3-a9df-49d5-b1c7-9cd5211e0c4e" containerName="registry-server" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.029057 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25dd6f2-cd3b-42ea-8adc-d435c977286c" containerName="copy" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.029071 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25dd6f2-cd3b-42ea-8adc-d435c977286c" containerName="gather" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.030836 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.048667 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsvcv"] Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.184336 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-catalog-content\") pod \"certified-operators-jsvcv\" (UID: \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\") " pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.184401 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl2gl\" (UniqueName: \"kubernetes.io/projected/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-kube-api-access-cl2gl\") pod \"certified-operators-jsvcv\" (UID: \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\") " pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.184456 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-utilities\") pod \"certified-operators-jsvcv\" (UID: \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\") " pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.287258 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-catalog-content\") pod \"certified-operators-jsvcv\" (UID: \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\") " pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.287431 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl2gl\" (UniqueName: \"kubernetes.io/projected/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-kube-api-access-cl2gl\") pod \"certified-operators-jsvcv\" (UID: \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\") " pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.287611 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-utilities\") pod \"certified-operators-jsvcv\" (UID: \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\") " pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.288192 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-utilities\") pod \"certified-operators-jsvcv\" (UID: \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\") " pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.288218 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-catalog-content\") pod \"certified-operators-jsvcv\" (UID: \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\") " pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.313866 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cl2gl\" (UniqueName: \"kubernetes.io/projected/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-kube-api-access-cl2gl\") pod \"certified-operators-jsvcv\" (UID: \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\") " pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.352749 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:18 crc kubenswrapper[4806]: I0126 09:44:18.869899 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jsvcv"] Jan 26 09:44:19 crc kubenswrapper[4806]: I0126 09:44:19.492043 4806 generic.go:334] "Generic (PLEG): container finished" podID="b78544f5-8527-4f24-8f6c-9e517bbd7aa0" containerID="e05c912af8a6153cf16ca898a88900c8647e95244e7c16c1d6ea318c2973c189" exitCode=0 Jan 26 09:44:19 crc kubenswrapper[4806]: I0126 09:44:19.492174 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsvcv" event={"ID":"b78544f5-8527-4f24-8f6c-9e517bbd7aa0","Type":"ContainerDied","Data":"e05c912af8a6153cf16ca898a88900c8647e95244e7c16c1d6ea318c2973c189"} Jan 26 09:44:19 crc kubenswrapper[4806]: I0126 09:44:19.492461 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsvcv" event={"ID":"b78544f5-8527-4f24-8f6c-9e517bbd7aa0","Type":"ContainerStarted","Data":"a273633d008b260c24ba4f9138d55a0a37714b95193a3bdaabc8d781ac73ac4a"} Jan 26 09:44:20 crc kubenswrapper[4806]: I0126 09:44:20.505538 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsvcv" event={"ID":"b78544f5-8527-4f24-8f6c-9e517bbd7aa0","Type":"ContainerStarted","Data":"1607ddf33463466ae6233e8825e9d7a35c4a9a6191f3c2708bd4965462a48a50"} Jan 26 09:44:21 crc kubenswrapper[4806]: I0126 09:44:21.518227 4806 generic.go:334] "Generic (PLEG): container finished" podID="b78544f5-8527-4f24-8f6c-9e517bbd7aa0" containerID="1607ddf33463466ae6233e8825e9d7a35c4a9a6191f3c2708bd4965462a48a50" exitCode=0 Jan 26 09:44:21 crc kubenswrapper[4806]: I0126 09:44:21.518302 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsvcv" event={"ID":"b78544f5-8527-4f24-8f6c-9e517bbd7aa0","Type":"ContainerDied","Data":"1607ddf33463466ae6233e8825e9d7a35c4a9a6191f3c2708bd4965462a48a50"} Jan 26 09:44:22 crc kubenswrapper[4806]: I0126 09:44:22.528709 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsvcv" event={"ID":"b78544f5-8527-4f24-8f6c-9e517bbd7aa0","Type":"ContainerStarted","Data":"ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790"} Jan 26 09:44:28 crc kubenswrapper[4806]: I0126 09:44:28.353152 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:28 crc kubenswrapper[4806]: I0126 09:44:28.353772 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:28 crc kubenswrapper[4806]: I0126 09:44:28.404496 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:28 crc kubenswrapper[4806]: I0126 09:44:28.429470 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-jsvcv" podStartSLOduration=7.986599942 podStartE2EDuration="10.429452574s" podCreationTimestamp="2026-01-26 09:44:18 +0000 UTC" firstStartedPulling="2026-01-26 09:44:19.494449207 +0000 UTC m=+6638.758857273" lastFinishedPulling="2026-01-26 09:44:21.937301859 +0000 UTC m=+6641.201709905" observedRunningTime="2026-01-26 09:44:22.555095487 +0000 UTC m=+6641.819503553" watchObservedRunningTime="2026-01-26 09:44:28.429452574 +0000 UTC m=+6647.693860630" Jan 26 09:44:28 crc kubenswrapper[4806]: I0126 09:44:28.638892 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:28 crc kubenswrapper[4806]: I0126 09:44:28.685492 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsvcv"] Jan 26 09:44:30 crc kubenswrapper[4806]: I0126 09:44:30.607209 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jsvcv" podUID="b78544f5-8527-4f24-8f6c-9e517bbd7aa0" containerName="registry-server" containerID="cri-o://ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790" gracePeriod=2 Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.597105 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.665245 4806 generic.go:334] "Generic (PLEG): container finished" podID="b78544f5-8527-4f24-8f6c-9e517bbd7aa0" containerID="ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790" exitCode=0 Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.665297 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsvcv" event={"ID":"b78544f5-8527-4f24-8f6c-9e517bbd7aa0","Type":"ContainerDied","Data":"ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790"} Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.665337 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jsvcv" event={"ID":"b78544f5-8527-4f24-8f6c-9e517bbd7aa0","Type":"ContainerDied","Data":"a273633d008b260c24ba4f9138d55a0a37714b95193a3bdaabc8d781ac73ac4a"} Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.665366 4806 scope.go:117] "RemoveContainer" containerID="ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.665611 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jsvcv" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.683302 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-catalog-content\") pod \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\" (UID: \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\") " Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.683495 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-utilities\") pod \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\" (UID: \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\") " Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.683597 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl2gl\" (UniqueName: \"kubernetes.io/projected/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-kube-api-access-cl2gl\") pod \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\" (UID: \"b78544f5-8527-4f24-8f6c-9e517bbd7aa0\") " Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.685110 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-utilities" (OuterVolumeSpecName: "utilities") pod "b78544f5-8527-4f24-8f6c-9e517bbd7aa0" (UID: "b78544f5-8527-4f24-8f6c-9e517bbd7aa0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.707061 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-kube-api-access-cl2gl" (OuterVolumeSpecName: "kube-api-access-cl2gl") pod "b78544f5-8527-4f24-8f6c-9e517bbd7aa0" (UID: "b78544f5-8527-4f24-8f6c-9e517bbd7aa0"). InnerVolumeSpecName "kube-api-access-cl2gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.732956 4806 scope.go:117] "RemoveContainer" containerID="1607ddf33463466ae6233e8825e9d7a35c4a9a6191f3c2708bd4965462a48a50" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.750860 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b78544f5-8527-4f24-8f6c-9e517bbd7aa0" (UID: "b78544f5-8527-4f24-8f6c-9e517bbd7aa0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.758298 4806 scope.go:117] "RemoveContainer" containerID="e05c912af8a6153cf16ca898a88900c8647e95244e7c16c1d6ea318c2973c189" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.788862 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.788944 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl2gl\" (UniqueName: \"kubernetes.io/projected/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-kube-api-access-cl2gl\") on node \"crc\" DevicePath \"\"" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.788966 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b78544f5-8527-4f24-8f6c-9e517bbd7aa0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.802543 4806 scope.go:117] "RemoveContainer" containerID="ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790" Jan 26 09:44:31 crc kubenswrapper[4806]: E0126 09:44:31.803150 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790\": container with ID starting with ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790 not found: ID does not exist" containerID="ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.803181 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790"} err="failed to get container status \"ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790\": rpc error: code = NotFound desc = could not find container \"ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790\": container with ID starting with ff3d016073e82b9ffaefbf03e62ea9b86e57068a8f6e3108c14b447742c60790 not found: ID does not exist" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.803200 4806 scope.go:117] "RemoveContainer" containerID="1607ddf33463466ae6233e8825e9d7a35c4a9a6191f3c2708bd4965462a48a50" Jan 26 09:44:31 crc kubenswrapper[4806]: E0126 09:44:31.803390 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1607ddf33463466ae6233e8825e9d7a35c4a9a6191f3c2708bd4965462a48a50\": container with ID starting with 1607ddf33463466ae6233e8825e9d7a35c4a9a6191f3c2708bd4965462a48a50 not found: ID does not exist" containerID="1607ddf33463466ae6233e8825e9d7a35c4a9a6191f3c2708bd4965462a48a50" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.803405 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1607ddf33463466ae6233e8825e9d7a35c4a9a6191f3c2708bd4965462a48a50"} err="failed to get container status \"1607ddf33463466ae6233e8825e9d7a35c4a9a6191f3c2708bd4965462a48a50\": rpc error: code = NotFound desc = could not find container \"1607ddf33463466ae6233e8825e9d7a35c4a9a6191f3c2708bd4965462a48a50\": container with ID starting with 1607ddf33463466ae6233e8825e9d7a35c4a9a6191f3c2708bd4965462a48a50 not found: ID does not exist" Jan 26 09:44:31 crc 
kubenswrapper[4806]: I0126 09:44:31.803419 4806 scope.go:117] "RemoveContainer" containerID="e05c912af8a6153cf16ca898a88900c8647e95244e7c16c1d6ea318c2973c189" Jan 26 09:44:31 crc kubenswrapper[4806]: E0126 09:44:31.803669 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e05c912af8a6153cf16ca898a88900c8647e95244e7c16c1d6ea318c2973c189\": container with ID starting with e05c912af8a6153cf16ca898a88900c8647e95244e7c16c1d6ea318c2973c189 not found: ID does not exist" containerID="e05c912af8a6153cf16ca898a88900c8647e95244e7c16c1d6ea318c2973c189" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.803685 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e05c912af8a6153cf16ca898a88900c8647e95244e7c16c1d6ea318c2973c189"} err="failed to get container status \"e05c912af8a6153cf16ca898a88900c8647e95244e7c16c1d6ea318c2973c189\": rpc error: code = NotFound desc = could not find container \"e05c912af8a6153cf16ca898a88900c8647e95244e7c16c1d6ea318c2973c189\": container with ID starting with e05c912af8a6153cf16ca898a88900c8647e95244e7c16c1d6ea318c2973c189 not found: ID does not exist" Jan 26 09:44:31 crc kubenswrapper[4806]: I0126 09:44:31.997075 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jsvcv"] Jan 26 09:44:32 crc kubenswrapper[4806]: I0126 09:44:32.007654 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jsvcv"] Jan 26 09:44:33 crc kubenswrapper[4806]: I0126 09:44:33.066155 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b78544f5-8527-4f24-8f6c-9e517bbd7aa0" path="/var/lib/kubelet/pods/b78544f5-8527-4f24-8f6c-9e517bbd7aa0/volumes" Jan 26 09:44:39 crc kubenswrapper[4806]: I0126 09:44:39.624826 4806 scope.go:117] "RemoveContainer" containerID="2fe90882c9b4d1ff29e3767613bbdfa585e4531ce39c6128082e97504f21908a" Jan 26 09:44:45 crc kubenswrapper[4806]: I0126 09:44:45.806650 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:44:45 crc kubenswrapper[4806]: I0126 09:44:45.807340 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.229563 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85"] Jan 26 09:45:00 crc kubenswrapper[4806]: E0126 09:45:00.230669 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b78544f5-8527-4f24-8f6c-9e517bbd7aa0" containerName="extract-content" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.230688 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78544f5-8527-4f24-8f6c-9e517bbd7aa0" containerName="extract-content" Jan 26 09:45:00 crc kubenswrapper[4806]: E0126 09:45:00.230707 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b78544f5-8527-4f24-8f6c-9e517bbd7aa0" containerName="extract-utilities" Jan 26 09:45:00 crc 
kubenswrapper[4806]: I0126 09:45:00.230715 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78544f5-8527-4f24-8f6c-9e517bbd7aa0" containerName="extract-utilities" Jan 26 09:45:00 crc kubenswrapper[4806]: E0126 09:45:00.230736 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b78544f5-8527-4f24-8f6c-9e517bbd7aa0" containerName="registry-server" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.230745 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b78544f5-8527-4f24-8f6c-9e517bbd7aa0" containerName="registry-server" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.230988 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b78544f5-8527-4f24-8f6c-9e517bbd7aa0" containerName="registry-server" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.231836 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.234938 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.241320 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.247885 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85"] Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.375276 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42xbs\" (UniqueName: \"kubernetes.io/projected/410233b6-10f3-4903-af5d-e5a434de95d8-kube-api-access-42xbs\") pod \"collect-profiles-29490345-42d85\" (UID: \"410233b6-10f3-4903-af5d-e5a434de95d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.375476 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/410233b6-10f3-4903-af5d-e5a434de95d8-config-volume\") pod \"collect-profiles-29490345-42d85\" (UID: \"410233b6-10f3-4903-af5d-e5a434de95d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.375570 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/410233b6-10f3-4903-af5d-e5a434de95d8-secret-volume\") pod \"collect-profiles-29490345-42d85\" (UID: \"410233b6-10f3-4903-af5d-e5a434de95d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.477341 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/410233b6-10f3-4903-af5d-e5a434de95d8-config-volume\") pod \"collect-profiles-29490345-42d85\" (UID: \"410233b6-10f3-4903-af5d-e5a434de95d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.477404 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/410233b6-10f3-4903-af5d-e5a434de95d8-secret-volume\") pod \"collect-profiles-29490345-42d85\" (UID: \"410233b6-10f3-4903-af5d-e5a434de95d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.477459 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42xbs\" (UniqueName: \"kubernetes.io/projected/410233b6-10f3-4903-af5d-e5a434de95d8-kube-api-access-42xbs\") pod \"collect-profiles-29490345-42d85\" (UID: \"410233b6-10f3-4903-af5d-e5a434de95d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.478557 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/410233b6-10f3-4903-af5d-e5a434de95d8-config-volume\") pod \"collect-profiles-29490345-42d85\" (UID: \"410233b6-10f3-4903-af5d-e5a434de95d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.496052 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/410233b6-10f3-4903-af5d-e5a434de95d8-secret-volume\") pod \"collect-profiles-29490345-42d85\" (UID: \"410233b6-10f3-4903-af5d-e5a434de95d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.501710 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42xbs\" (UniqueName: \"kubernetes.io/projected/410233b6-10f3-4903-af5d-e5a434de95d8-kube-api-access-42xbs\") pod \"collect-profiles-29490345-42d85\" (UID: \"410233b6-10f3-4903-af5d-e5a434de95d8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:00 crc kubenswrapper[4806]: I0126 09:45:00.567471 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:01 crc kubenswrapper[4806]: I0126 09:45:01.010289 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85"] Jan 26 09:45:01 crc kubenswrapper[4806]: I0126 09:45:01.973159 4806 generic.go:334] "Generic (PLEG): container finished" podID="410233b6-10f3-4903-af5d-e5a434de95d8" containerID="4b0d939bb6066a1b27cb24df22461dc0b7b95b96687b383c74e250c6df3006a1" exitCode=0 Jan 26 09:45:01 crc kubenswrapper[4806]: I0126 09:45:01.973245 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" event={"ID":"410233b6-10f3-4903-af5d-e5a434de95d8","Type":"ContainerDied","Data":"4b0d939bb6066a1b27cb24df22461dc0b7b95b96687b383c74e250c6df3006a1"} Jan 26 09:45:01 crc kubenswrapper[4806]: I0126 09:45:01.973553 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" event={"ID":"410233b6-10f3-4903-af5d-e5a434de95d8","Type":"ContainerStarted","Data":"a71bdd3b4becfbcea697e12829b9654ff0b65581bf656ff00fbd900ee1f2827b"} Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.295946 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.437727 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/410233b6-10f3-4903-af5d-e5a434de95d8-config-volume\") pod \"410233b6-10f3-4903-af5d-e5a434de95d8\" (UID: \"410233b6-10f3-4903-af5d-e5a434de95d8\") " Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.437839 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/410233b6-10f3-4903-af5d-e5a434de95d8-secret-volume\") pod \"410233b6-10f3-4903-af5d-e5a434de95d8\" (UID: \"410233b6-10f3-4903-af5d-e5a434de95d8\") " Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.437913 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42xbs\" (UniqueName: \"kubernetes.io/projected/410233b6-10f3-4903-af5d-e5a434de95d8-kube-api-access-42xbs\") pod \"410233b6-10f3-4903-af5d-e5a434de95d8\" (UID: \"410233b6-10f3-4903-af5d-e5a434de95d8\") " Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.438632 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/410233b6-10f3-4903-af5d-e5a434de95d8-config-volume" (OuterVolumeSpecName: "config-volume") pod "410233b6-10f3-4903-af5d-e5a434de95d8" (UID: "410233b6-10f3-4903-af5d-e5a434de95d8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.444204 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410233b6-10f3-4903-af5d-e5a434de95d8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "410233b6-10f3-4903-af5d-e5a434de95d8" (UID: "410233b6-10f3-4903-af5d-e5a434de95d8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.444730 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410233b6-10f3-4903-af5d-e5a434de95d8-kube-api-access-42xbs" (OuterVolumeSpecName: "kube-api-access-42xbs") pod "410233b6-10f3-4903-af5d-e5a434de95d8" (UID: "410233b6-10f3-4903-af5d-e5a434de95d8"). InnerVolumeSpecName "kube-api-access-42xbs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.540473 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/410233b6-10f3-4903-af5d-e5a434de95d8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.540512 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/410233b6-10f3-4903-af5d-e5a434de95d8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.540548 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42xbs\" (UniqueName: \"kubernetes.io/projected/410233b6-10f3-4903-af5d-e5a434de95d8-kube-api-access-42xbs\") on node \"crc\" DevicePath \"\"" Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.993724 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" event={"ID":"410233b6-10f3-4903-af5d-e5a434de95d8","Type":"ContainerDied","Data":"a71bdd3b4becfbcea697e12829b9654ff0b65581bf656ff00fbd900ee1f2827b"} Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.994075 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a71bdd3b4becfbcea697e12829b9654ff0b65581bf656ff00fbd900ee1f2827b" Jan 26 09:45:03 crc kubenswrapper[4806]: I0126 09:45:03.993765 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490345-42d85" Jan 26 09:45:04 crc kubenswrapper[4806]: I0126 09:45:04.391155 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66"] Jan 26 09:45:04 crc kubenswrapper[4806]: I0126 09:45:04.401123 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490300-wpj66"] Jan 26 09:45:05 crc kubenswrapper[4806]: I0126 09:45:05.055607 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e727139-3126-460d-9f74-0dc96b6fcf53" path="/var/lib/kubelet/pods/0e727139-3126-460d-9f74-0dc96b6fcf53/volumes" Jan 26 09:45:15 crc kubenswrapper[4806]: I0126 09:45:15.807023 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 09:45:15 crc kubenswrapper[4806]: I0126 09:45:15.807619 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:45:39 crc kubenswrapper[4806]: I0126 09:45:39.721918 4806 scope.go:117] "RemoveContainer" containerID="57aa4ad8d59f5cb77cb4a7c6b622e582931a0168af12f2d75b3912786fa4fd79" Jan 26 09:45:45 crc kubenswrapper[4806]: I0126 09:45:45.806670 4806 patch_prober.go:28] interesting pod/machine-config-daemon-k2tlk container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 26 09:45:45 crc kubenswrapper[4806]: I0126 09:45:45.809198 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 09:45:45 crc kubenswrapper[4806]: I0126 09:45:45.809373 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" Jan 26 09:45:45 crc kubenswrapper[4806]: I0126 09:45:45.810415 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017"} pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 09:45:45 crc kubenswrapper[4806]: I0126 09:45:45.810850 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" containerName="machine-config-daemon" containerID="cri-o://30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017" gracePeriod=600 Jan 26 09:45:45 crc kubenswrapper[4806]: E0126 09:45:45.943602 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:45:46 crc kubenswrapper[4806]: I0126 09:45:46.455479 4806 generic.go:334] "Generic (PLEG): container finished" podID="d07502a2-50b0-4012-b335-340a1c694c50" containerID="30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017" exitCode=0 Jan 26 09:45:46 crc kubenswrapper[4806]: I0126 09:45:46.455650 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" event={"ID":"d07502a2-50b0-4012-b335-340a1c694c50","Type":"ContainerDied","Data":"30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017"} Jan 26 09:45:46 crc kubenswrapper[4806]: I0126 09:45:46.456107 4806 scope.go:117] "RemoveContainer" containerID="94db467df12a6038972412ef143c9c2677da69013ebdb4ab5f0a652f611d1b29" Jan 26 09:45:46 crc kubenswrapper[4806]: I0126 09:45:46.457273 4806 scope.go:117] "RemoveContainer" containerID="30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017" Jan 26 09:45:46 crc kubenswrapper[4806]: E0126 09:45:46.457954 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:45:58 crc kubenswrapper[4806]: I0126 09:45:58.042012 4806 scope.go:117] "RemoveContainer" 
containerID="30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017" Jan 26 09:45:58 crc kubenswrapper[4806]: E0126 09:45:58.042827 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:45:59 crc kubenswrapper[4806]: I0126 09:45:59.882482 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-grjnj"] Jan 26 09:45:59 crc kubenswrapper[4806]: E0126 09:45:59.883158 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410233b6-10f3-4903-af5d-e5a434de95d8" containerName="collect-profiles" Jan 26 09:45:59 crc kubenswrapper[4806]: I0126 09:45:59.883170 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="410233b6-10f3-4903-af5d-e5a434de95d8" containerName="collect-profiles" Jan 26 09:45:59 crc kubenswrapper[4806]: I0126 09:45:59.883347 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="410233b6-10f3-4903-af5d-e5a434de95d8" containerName="collect-profiles" Jan 26 09:45:59 crc kubenswrapper[4806]: I0126 09:45:59.884958 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:45:59 crc kubenswrapper[4806]: I0126 09:45:59.896491 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grjnj"] Jan 26 09:46:00 crc kubenswrapper[4806]: I0126 09:46:00.042335 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7539c482-cf87-4a4d-a52c-102ba6296a7b-utilities\") pod \"redhat-marketplace-grjnj\" (UID: \"7539c482-cf87-4a4d-a52c-102ba6296a7b\") " pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:00 crc kubenswrapper[4806]: I0126 09:46:00.042746 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xn9w\" (UniqueName: \"kubernetes.io/projected/7539c482-cf87-4a4d-a52c-102ba6296a7b-kube-api-access-5xn9w\") pod \"redhat-marketplace-grjnj\" (UID: \"7539c482-cf87-4a4d-a52c-102ba6296a7b\") " pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:00 crc kubenswrapper[4806]: I0126 09:46:00.042789 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7539c482-cf87-4a4d-a52c-102ba6296a7b-catalog-content\") pod \"redhat-marketplace-grjnj\" (UID: \"7539c482-cf87-4a4d-a52c-102ba6296a7b\") " pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:00 crc kubenswrapper[4806]: I0126 09:46:00.144737 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7539c482-cf87-4a4d-a52c-102ba6296a7b-utilities\") pod \"redhat-marketplace-grjnj\" (UID: \"7539c482-cf87-4a4d-a52c-102ba6296a7b\") " pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:00 crc kubenswrapper[4806]: I0126 09:46:00.144858 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xn9w\" (UniqueName: 
\"kubernetes.io/projected/7539c482-cf87-4a4d-a52c-102ba6296a7b-kube-api-access-5xn9w\") pod \"redhat-marketplace-grjnj\" (UID: \"7539c482-cf87-4a4d-a52c-102ba6296a7b\") " pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:00 crc kubenswrapper[4806]: I0126 09:46:00.145325 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7539c482-cf87-4a4d-a52c-102ba6296a7b-utilities\") pod \"redhat-marketplace-grjnj\" (UID: \"7539c482-cf87-4a4d-a52c-102ba6296a7b\") " pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:00 crc kubenswrapper[4806]: I0126 09:46:00.145325 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7539c482-cf87-4a4d-a52c-102ba6296a7b-catalog-content\") pod \"redhat-marketplace-grjnj\" (UID: \"7539c482-cf87-4a4d-a52c-102ba6296a7b\") " pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:00 crc kubenswrapper[4806]: I0126 09:46:00.145855 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7539c482-cf87-4a4d-a52c-102ba6296a7b-catalog-content\") pod \"redhat-marketplace-grjnj\" (UID: \"7539c482-cf87-4a4d-a52c-102ba6296a7b\") " pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:00 crc kubenswrapper[4806]: I0126 09:46:00.177430 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xn9w\" (UniqueName: \"kubernetes.io/projected/7539c482-cf87-4a4d-a52c-102ba6296a7b-kube-api-access-5xn9w\") pod \"redhat-marketplace-grjnj\" (UID: \"7539c482-cf87-4a4d-a52c-102ba6296a7b\") " pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:00 crc kubenswrapper[4806]: I0126 09:46:00.204288 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:00 crc kubenswrapper[4806]: I0126 09:46:00.723716 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-grjnj"] Jan 26 09:46:01 crc kubenswrapper[4806]: E0126 09:46:01.190701 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7539c482_cf87_4a4d_a52c_102ba6296a7b.slice/crio-conmon-1b9bb6fa4af158ba7c7ea1a7973ac6ee70de1c2fb4fa5aacdad59ad6e1fafa2e.scope\": RecentStats: unable to find data in memory cache]" Jan 26 09:46:01 crc kubenswrapper[4806]: I0126 09:46:01.585758 4806 generic.go:334] "Generic (PLEG): container finished" podID="7539c482-cf87-4a4d-a52c-102ba6296a7b" containerID="1b9bb6fa4af158ba7c7ea1a7973ac6ee70de1c2fb4fa5aacdad59ad6e1fafa2e" exitCode=0 Jan 26 09:46:01 crc kubenswrapper[4806]: I0126 09:46:01.585826 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjnj" event={"ID":"7539c482-cf87-4a4d-a52c-102ba6296a7b","Type":"ContainerDied","Data":"1b9bb6fa4af158ba7c7ea1a7973ac6ee70de1c2fb4fa5aacdad59ad6e1fafa2e"} Jan 26 09:46:01 crc kubenswrapper[4806]: I0126 09:46:01.586214 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjnj" event={"ID":"7539c482-cf87-4a4d-a52c-102ba6296a7b","Type":"ContainerStarted","Data":"89574c115b198439b5103aa48a9765ddc00307320b225bbfa22c51fc7e78519c"} Jan 26 09:46:02 crc kubenswrapper[4806]: I0126 09:46:02.609194 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjnj" event={"ID":"7539c482-cf87-4a4d-a52c-102ba6296a7b","Type":"ContainerStarted","Data":"529ba4a5db4d6a790dbe5573bef8b0578389ca594c03e333010c881716b0d8d2"} Jan 26 09:46:03 crc kubenswrapper[4806]: I0126 09:46:03.617682 4806 generic.go:334] "Generic (PLEG): container finished" podID="7539c482-cf87-4a4d-a52c-102ba6296a7b" containerID="529ba4a5db4d6a790dbe5573bef8b0578389ca594c03e333010c881716b0d8d2" exitCode=0 Jan 26 09:46:03 crc kubenswrapper[4806]: I0126 09:46:03.617963 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjnj" event={"ID":"7539c482-cf87-4a4d-a52c-102ba6296a7b","Type":"ContainerDied","Data":"529ba4a5db4d6a790dbe5573bef8b0578389ca594c03e333010c881716b0d8d2"} Jan 26 09:46:04 crc kubenswrapper[4806]: I0126 09:46:04.627903 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjnj" event={"ID":"7539c482-cf87-4a4d-a52c-102ba6296a7b","Type":"ContainerStarted","Data":"0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc"} Jan 26 09:46:04 crc kubenswrapper[4806]: I0126 09:46:04.660959 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-grjnj" podStartSLOduration=3.10377147 podStartE2EDuration="5.660940482s" podCreationTimestamp="2026-01-26 09:45:59 +0000 UTC" firstStartedPulling="2026-01-26 09:46:01.603938276 +0000 UTC m=+6740.868346322" lastFinishedPulling="2026-01-26 09:46:04.161107258 +0000 UTC m=+6743.425515334" observedRunningTime="2026-01-26 09:46:04.659670716 +0000 UTC m=+6743.924078782" watchObservedRunningTime="2026-01-26 09:46:04.660940482 +0000 UTC m=+6743.925348528" Jan 26 09:46:10 crc kubenswrapper[4806]: I0126 09:46:10.204662 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:10 crc kubenswrapper[4806]: I0126 09:46:10.207201 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:10 crc kubenswrapper[4806]: I0126 09:46:10.265818 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:10 crc kubenswrapper[4806]: I0126 09:46:10.748646 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:10 crc kubenswrapper[4806]: I0126 09:46:10.797119 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grjnj"] Jan 26 09:46:12 crc kubenswrapper[4806]: I0126 09:46:12.699601 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-grjnj" podUID="7539c482-cf87-4a4d-a52c-102ba6296a7b" containerName="registry-server" containerID="cri-o://0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc" gracePeriod=2 Jan 26 09:46:12 crc kubenswrapper[4806]: I0126 09:46:12.929709 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hmrwv"] Jan 26 09:46:12 crc kubenswrapper[4806]: I0126 09:46:12.932112 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:12 crc kubenswrapper[4806]: I0126 09:46:12.963211 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmrwv"] Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.052095 4806 scope.go:117] "RemoveContainer" containerID="30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017" Jan 26 09:46:13 crc kubenswrapper[4806]: E0126 09:46:13.052402 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.076646 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj6c6\" (UniqueName: \"kubernetes.io/projected/e39b07af-2ce3-4118-843d-7c4fe05848ec-kube-api-access-cj6c6\") pod \"redhat-operators-hmrwv\" (UID: \"e39b07af-2ce3-4118-843d-7c4fe05848ec\") " pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.076711 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39b07af-2ce3-4118-843d-7c4fe05848ec-catalog-content\") pod \"redhat-operators-hmrwv\" (UID: \"e39b07af-2ce3-4118-843d-7c4fe05848ec\") " pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.076798 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39b07af-2ce3-4118-843d-7c4fe05848ec-utilities\") pod \"redhat-operators-hmrwv\" (UID: \"e39b07af-2ce3-4118-843d-7c4fe05848ec\") " 
pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.178613 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39b07af-2ce3-4118-843d-7c4fe05848ec-utilities\") pod \"redhat-operators-hmrwv\" (UID: \"e39b07af-2ce3-4118-843d-7c4fe05848ec\") " pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.179708 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj6c6\" (UniqueName: \"kubernetes.io/projected/e39b07af-2ce3-4118-843d-7c4fe05848ec-kube-api-access-cj6c6\") pod \"redhat-operators-hmrwv\" (UID: \"e39b07af-2ce3-4118-843d-7c4fe05848ec\") " pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.179893 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39b07af-2ce3-4118-843d-7c4fe05848ec-catalog-content\") pod \"redhat-operators-hmrwv\" (UID: \"e39b07af-2ce3-4118-843d-7c4fe05848ec\") " pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.180008 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39b07af-2ce3-4118-843d-7c4fe05848ec-utilities\") pod \"redhat-operators-hmrwv\" (UID: \"e39b07af-2ce3-4118-843d-7c4fe05848ec\") " pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.180946 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39b07af-2ce3-4118-843d-7c4fe05848ec-catalog-content\") pod \"redhat-operators-hmrwv\" (UID: \"e39b07af-2ce3-4118-843d-7c4fe05848ec\") " pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.214641 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj6c6\" (UniqueName: \"kubernetes.io/projected/e39b07af-2ce3-4118-843d-7c4fe05848ec-kube-api-access-cj6c6\") pod \"redhat-operators-hmrwv\" (UID: \"e39b07af-2ce3-4118-843d-7c4fe05848ec\") " pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.256090 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.360936 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.486185 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xn9w\" (UniqueName: \"kubernetes.io/projected/7539c482-cf87-4a4d-a52c-102ba6296a7b-kube-api-access-5xn9w\") pod \"7539c482-cf87-4a4d-a52c-102ba6296a7b\" (UID: \"7539c482-cf87-4a4d-a52c-102ba6296a7b\") " Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.486321 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7539c482-cf87-4a4d-a52c-102ba6296a7b-catalog-content\") pod \"7539c482-cf87-4a4d-a52c-102ba6296a7b\" (UID: \"7539c482-cf87-4a4d-a52c-102ba6296a7b\") " Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.486388 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7539c482-cf87-4a4d-a52c-102ba6296a7b-utilities\") pod \"7539c482-cf87-4a4d-a52c-102ba6296a7b\" (UID: \"7539c482-cf87-4a4d-a52c-102ba6296a7b\") " Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.487425 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7539c482-cf87-4a4d-a52c-102ba6296a7b-utilities" (OuterVolumeSpecName: "utilities") pod "7539c482-cf87-4a4d-a52c-102ba6296a7b" (UID: "7539c482-cf87-4a4d-a52c-102ba6296a7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.496773 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539c482-cf87-4a4d-a52c-102ba6296a7b-kube-api-access-5xn9w" (OuterVolumeSpecName: "kube-api-access-5xn9w") pod "7539c482-cf87-4a4d-a52c-102ba6296a7b" (UID: "7539c482-cf87-4a4d-a52c-102ba6296a7b"). InnerVolumeSpecName "kube-api-access-5xn9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.516034 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7539c482-cf87-4a4d-a52c-102ba6296a7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7539c482-cf87-4a4d-a52c-102ba6296a7b" (UID: "7539c482-cf87-4a4d-a52c-102ba6296a7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.588296 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7539c482-cf87-4a4d-a52c-102ba6296a7b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.588510 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7539c482-cf87-4a4d-a52c-102ba6296a7b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.588533 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xn9w\" (UniqueName: \"kubernetes.io/projected/7539c482-cf87-4a4d-a52c-102ba6296a7b-kube-api-access-5xn9w\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.709411 4806 generic.go:334] "Generic (PLEG): container finished" podID="7539c482-cf87-4a4d-a52c-102ba6296a7b" containerID="0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc" exitCode=0 Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.709452 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjnj" event={"ID":"7539c482-cf87-4a4d-a52c-102ba6296a7b","Type":"ContainerDied","Data":"0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc"} Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.709483 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-grjnj" event={"ID":"7539c482-cf87-4a4d-a52c-102ba6296a7b","Type":"ContainerDied","Data":"89574c115b198439b5103aa48a9765ddc00307320b225bbfa22c51fc7e78519c"} Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.709502 4806 scope.go:117] "RemoveContainer" containerID="0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.709514 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-grjnj" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.726170 4806 scope.go:117] "RemoveContainer" containerID="529ba4a5db4d6a790dbe5573bef8b0578389ca594c03e333010c881716b0d8d2" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.746592 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-grjnj"] Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.753565 4806 scope.go:117] "RemoveContainer" containerID="1b9bb6fa4af158ba7c7ea1a7973ac6ee70de1c2fb4fa5aacdad59ad6e1fafa2e" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.757559 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-grjnj"] Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.775925 4806 scope.go:117] "RemoveContainer" containerID="0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc" Jan 26 09:46:13 crc kubenswrapper[4806]: E0126 09:46:13.776304 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc\": container with ID starting with 0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc not found: ID does not exist" containerID="0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.776347 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc"} err="failed to get container status \"0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc\": rpc error: code = NotFound desc = could not find container \"0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc\": container with ID starting with 0e45b71b3e774c05c60aa99b4e27ef00ff0945b2b643e529219f8dfabe34d2bc not found: ID does not exist" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.776377 4806 scope.go:117] "RemoveContainer" containerID="529ba4a5db4d6a790dbe5573bef8b0578389ca594c03e333010c881716b0d8d2" Jan 26 09:46:13 crc kubenswrapper[4806]: E0126 09:46:13.776750 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"529ba4a5db4d6a790dbe5573bef8b0578389ca594c03e333010c881716b0d8d2\": container with ID starting with 529ba4a5db4d6a790dbe5573bef8b0578389ca594c03e333010c881716b0d8d2 not found: ID does not exist" containerID="529ba4a5db4d6a790dbe5573bef8b0578389ca594c03e333010c881716b0d8d2" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.776782 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"529ba4a5db4d6a790dbe5573bef8b0578389ca594c03e333010c881716b0d8d2"} err="failed to get container status \"529ba4a5db4d6a790dbe5573bef8b0578389ca594c03e333010c881716b0d8d2\": rpc error: code = NotFound desc = could not find container \"529ba4a5db4d6a790dbe5573bef8b0578389ca594c03e333010c881716b0d8d2\": container with ID starting with 529ba4a5db4d6a790dbe5573bef8b0578389ca594c03e333010c881716b0d8d2 not found: ID does not exist" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.776806 4806 scope.go:117] "RemoveContainer" containerID="1b9bb6fa4af158ba7c7ea1a7973ac6ee70de1c2fb4fa5aacdad59ad6e1fafa2e" Jan 26 09:46:13 crc kubenswrapper[4806]: E0126 09:46:13.777036 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1b9bb6fa4af158ba7c7ea1a7973ac6ee70de1c2fb4fa5aacdad59ad6e1fafa2e\": container with ID starting with 1b9bb6fa4af158ba7c7ea1a7973ac6ee70de1c2fb4fa5aacdad59ad6e1fafa2e not found: ID does not exist" containerID="1b9bb6fa4af158ba7c7ea1a7973ac6ee70de1c2fb4fa5aacdad59ad6e1fafa2e" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.777081 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9bb6fa4af158ba7c7ea1a7973ac6ee70de1c2fb4fa5aacdad59ad6e1fafa2e"} err="failed to get container status \"1b9bb6fa4af158ba7c7ea1a7973ac6ee70de1c2fb4fa5aacdad59ad6e1fafa2e\": rpc error: code = NotFound desc = could not find container \"1b9bb6fa4af158ba7c7ea1a7973ac6ee70de1c2fb4fa5aacdad59ad6e1fafa2e\": container with ID starting with 1b9bb6fa4af158ba7c7ea1a7973ac6ee70de1c2fb4fa5aacdad59ad6e1fafa2e not found: ID does not exist" Jan 26 09:46:13 crc kubenswrapper[4806]: I0126 09:46:13.779895 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmrwv"] Jan 26 09:46:13 crc kubenswrapper[4806]: W0126 09:46:13.785474 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode39b07af_2ce3_4118_843d_7c4fe05848ec.slice/crio-20d2b1d96941f30752be6e72a8b7cd03c69b6e12513691d1d1b92e41f1972cb3 WatchSource:0}: Error finding container 20d2b1d96941f30752be6e72a8b7cd03c69b6e12513691d1d1b92e41f1972cb3: Status 404 returned error can't find the container with id 20d2b1d96941f30752be6e72a8b7cd03c69b6e12513691d1d1b92e41f1972cb3 Jan 26 09:46:14 crc kubenswrapper[4806]: I0126 09:46:14.719037 4806 generic.go:334] "Generic (PLEG): container finished" podID="e39b07af-2ce3-4118-843d-7c4fe05848ec" containerID="f22512165ad3b506b521627bfdcf8789c0fcb8bb0a733c09bbc438a3402a9228" exitCode=0 Jan 26 09:46:14 crc kubenswrapper[4806]: I0126 09:46:14.719072 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmrwv" event={"ID":"e39b07af-2ce3-4118-843d-7c4fe05848ec","Type":"ContainerDied","Data":"f22512165ad3b506b521627bfdcf8789c0fcb8bb0a733c09bbc438a3402a9228"} Jan 26 09:46:14 crc kubenswrapper[4806]: I0126 09:46:14.719267 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmrwv" event={"ID":"e39b07af-2ce3-4118-843d-7c4fe05848ec","Type":"ContainerStarted","Data":"20d2b1d96941f30752be6e72a8b7cd03c69b6e12513691d1d1b92e41f1972cb3"} Jan 26 09:46:15 crc kubenswrapper[4806]: I0126 09:46:15.053328 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539c482-cf87-4a4d-a52c-102ba6296a7b" path="/var/lib/kubelet/pods/7539c482-cf87-4a4d-a52c-102ba6296a7b/volumes" Jan 26 09:46:15 crc kubenswrapper[4806]: I0126 09:46:15.728741 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmrwv" event={"ID":"e39b07af-2ce3-4118-843d-7c4fe05848ec","Type":"ContainerStarted","Data":"7fea3759abe8a21dca20d517bf8f6499350553637a73e3bb9e73e1c0919315cb"} Jan 26 09:46:19 crc kubenswrapper[4806]: I0126 09:46:19.768486 4806 generic.go:334] "Generic (PLEG): container finished" podID="e39b07af-2ce3-4118-843d-7c4fe05848ec" containerID="7fea3759abe8a21dca20d517bf8f6499350553637a73e3bb9e73e1c0919315cb" exitCode=0 Jan 26 09:46:19 crc kubenswrapper[4806]: I0126 09:46:19.769050 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmrwv" 
event={"ID":"e39b07af-2ce3-4118-843d-7c4fe05848ec","Type":"ContainerDied","Data":"7fea3759abe8a21dca20d517bf8f6499350553637a73e3bb9e73e1c0919315cb"} Jan 26 09:46:20 crc kubenswrapper[4806]: I0126 09:46:20.779882 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmrwv" event={"ID":"e39b07af-2ce3-4118-843d-7c4fe05848ec","Type":"ContainerStarted","Data":"8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609"} Jan 26 09:46:20 crc kubenswrapper[4806]: I0126 09:46:20.807940 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hmrwv" podStartSLOduration=3.369707001 podStartE2EDuration="8.807915782s" podCreationTimestamp="2026-01-26 09:46:12 +0000 UTC" firstStartedPulling="2026-01-26 09:46:14.721577869 +0000 UTC m=+6753.985985925" lastFinishedPulling="2026-01-26 09:46:20.15978665 +0000 UTC m=+6759.424194706" observedRunningTime="2026-01-26 09:46:20.802783866 +0000 UTC m=+6760.067191922" watchObservedRunningTime="2026-01-26 09:46:20.807915782 +0000 UTC m=+6760.072323848" Jan 26 09:46:23 crc kubenswrapper[4806]: I0126 09:46:23.257209 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:23 crc kubenswrapper[4806]: I0126 09:46:23.258662 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:24 crc kubenswrapper[4806]: I0126 09:46:24.306285 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hmrwv" podUID="e39b07af-2ce3-4118-843d-7c4fe05848ec" containerName="registry-server" probeResult="failure" output=< Jan 26 09:46:24 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 09:46:24 crc kubenswrapper[4806]: > Jan 26 09:46:27 crc kubenswrapper[4806]: I0126 09:46:27.041676 4806 scope.go:117] "RemoveContainer" containerID="30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017" Jan 26 09:46:27 crc kubenswrapper[4806]: E0126 09:46:27.042217 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:46:34 crc kubenswrapper[4806]: I0126 09:46:34.351842 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hmrwv" podUID="e39b07af-2ce3-4118-843d-7c4fe05848ec" containerName="registry-server" probeResult="failure" output=< Jan 26 09:46:34 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Jan 26 09:46:34 crc kubenswrapper[4806]: > Jan 26 09:46:40 crc kubenswrapper[4806]: I0126 09:46:40.042196 4806 scope.go:117] "RemoveContainer" containerID="30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017" Jan 26 09:46:40 crc kubenswrapper[4806]: E0126 09:46:40.043261 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:46:43 crc kubenswrapper[4806]: I0126 09:46:43.321239 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:43 crc kubenswrapper[4806]: I0126 09:46:43.404996 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:44 crc kubenswrapper[4806]: I0126 09:46:44.146996 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmrwv"] Jan 26 09:46:45 crc kubenswrapper[4806]: I0126 09:46:45.064836 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hmrwv" podUID="e39b07af-2ce3-4118-843d-7c4fe05848ec" containerName="registry-server" containerID="cri-o://8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609" gracePeriod=2 Jan 26 09:46:45 crc kubenswrapper[4806]: I0126 09:46:45.575981 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:45 crc kubenswrapper[4806]: I0126 09:46:45.748388 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39b07af-2ce3-4118-843d-7c4fe05848ec-utilities\") pod \"e39b07af-2ce3-4118-843d-7c4fe05848ec\" (UID: \"e39b07af-2ce3-4118-843d-7c4fe05848ec\") " Jan 26 09:46:45 crc kubenswrapper[4806]: I0126 09:46:45.748623 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39b07af-2ce3-4118-843d-7c4fe05848ec-catalog-content\") pod \"e39b07af-2ce3-4118-843d-7c4fe05848ec\" (UID: \"e39b07af-2ce3-4118-843d-7c4fe05848ec\") " Jan 26 09:46:45 crc kubenswrapper[4806]: I0126 09:46:45.748834 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj6c6\" (UniqueName: \"kubernetes.io/projected/e39b07af-2ce3-4118-843d-7c4fe05848ec-kube-api-access-cj6c6\") pod \"e39b07af-2ce3-4118-843d-7c4fe05848ec\" (UID: \"e39b07af-2ce3-4118-843d-7c4fe05848ec\") " Jan 26 09:46:45 crc kubenswrapper[4806]: I0126 09:46:45.749045 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e39b07af-2ce3-4118-843d-7c4fe05848ec-utilities" (OuterVolumeSpecName: "utilities") pod "e39b07af-2ce3-4118-843d-7c4fe05848ec" (UID: "e39b07af-2ce3-4118-843d-7c4fe05848ec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:46:45 crc kubenswrapper[4806]: I0126 09:46:45.749346 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39b07af-2ce3-4118-843d-7c4fe05848ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:45 crc kubenswrapper[4806]: I0126 09:46:45.757160 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39b07af-2ce3-4118-843d-7c4fe05848ec-kube-api-access-cj6c6" (OuterVolumeSpecName: "kube-api-access-cj6c6") pod "e39b07af-2ce3-4118-843d-7c4fe05848ec" (UID: "e39b07af-2ce3-4118-843d-7c4fe05848ec"). InnerVolumeSpecName "kube-api-access-cj6c6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 09:46:45 crc kubenswrapper[4806]: I0126 09:46:45.852914 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj6c6\" (UniqueName: \"kubernetes.io/projected/e39b07af-2ce3-4118-843d-7c4fe05848ec-kube-api-access-cj6c6\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:45 crc kubenswrapper[4806]: I0126 09:46:45.873146 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e39b07af-2ce3-4118-843d-7c4fe05848ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e39b07af-2ce3-4118-843d-7c4fe05848ec" (UID: "e39b07af-2ce3-4118-843d-7c4fe05848ec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 09:46:45 crc kubenswrapper[4806]: I0126 09:46:45.954968 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39b07af-2ce3-4118-843d-7c4fe05848ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.074459 4806 generic.go:334] "Generic (PLEG): container finished" podID="e39b07af-2ce3-4118-843d-7c4fe05848ec" containerID="8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609" exitCode=0 Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.074551 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmrwv" event={"ID":"e39b07af-2ce3-4118-843d-7c4fe05848ec","Type":"ContainerDied","Data":"8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609"} Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.074594 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmrwv" event={"ID":"e39b07af-2ce3-4118-843d-7c4fe05848ec","Type":"ContainerDied","Data":"20d2b1d96941f30752be6e72a8b7cd03c69b6e12513691d1d1b92e41f1972cb3"} Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.074625 4806 scope.go:117] "RemoveContainer" containerID="8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609" Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.074810 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hmrwv" Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.104478 4806 scope.go:117] "RemoveContainer" containerID="7fea3759abe8a21dca20d517bf8f6499350553637a73e3bb9e73e1c0919315cb" Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.129475 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmrwv"] Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.145201 4806 scope.go:117] "RemoveContainer" containerID="f22512165ad3b506b521627bfdcf8789c0fcb8bb0a733c09bbc438a3402a9228" Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.148233 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hmrwv"] Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.188938 4806 scope.go:117] "RemoveContainer" containerID="8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609" Jan 26 09:46:46 crc kubenswrapper[4806]: E0126 09:46:46.189333 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609\": container with ID starting with 8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609 not found: ID does not exist" containerID="8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609" Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.189360 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609"} err="failed to get container status \"8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609\": rpc error: code = NotFound desc = could not find container \"8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609\": container with ID starting with 8d52a59ac679ec137be0e1213a738ef4ea2dff4c5ecc0ee25c1d49e06d03c609 not found: ID does not exist" Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.189379 4806 scope.go:117] "RemoveContainer" containerID="7fea3759abe8a21dca20d517bf8f6499350553637a73e3bb9e73e1c0919315cb" Jan 26 09:46:46 crc kubenswrapper[4806]: E0126 09:46:46.189844 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fea3759abe8a21dca20d517bf8f6499350553637a73e3bb9e73e1c0919315cb\": container with ID starting with 7fea3759abe8a21dca20d517bf8f6499350553637a73e3bb9e73e1c0919315cb not found: ID does not exist" containerID="7fea3759abe8a21dca20d517bf8f6499350553637a73e3bb9e73e1c0919315cb" Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.189866 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fea3759abe8a21dca20d517bf8f6499350553637a73e3bb9e73e1c0919315cb"} err="failed to get container status \"7fea3759abe8a21dca20d517bf8f6499350553637a73e3bb9e73e1c0919315cb\": rpc error: code = NotFound desc = could not find container \"7fea3759abe8a21dca20d517bf8f6499350553637a73e3bb9e73e1c0919315cb\": container with ID starting with 7fea3759abe8a21dca20d517bf8f6499350553637a73e3bb9e73e1c0919315cb not found: ID does not exist" Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.189880 4806 scope.go:117] "RemoveContainer" containerID="f22512165ad3b506b521627bfdcf8789c0fcb8bb0a733c09bbc438a3402a9228" Jan 26 09:46:46 crc kubenswrapper[4806]: E0126 09:46:46.190074 4806 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"f22512165ad3b506b521627bfdcf8789c0fcb8bb0a733c09bbc438a3402a9228\": container with ID starting with f22512165ad3b506b521627bfdcf8789c0fcb8bb0a733c09bbc438a3402a9228 not found: ID does not exist" containerID="f22512165ad3b506b521627bfdcf8789c0fcb8bb0a733c09bbc438a3402a9228" Jan 26 09:46:46 crc kubenswrapper[4806]: I0126 09:46:46.190089 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f22512165ad3b506b521627bfdcf8789c0fcb8bb0a733c09bbc438a3402a9228"} err="failed to get container status \"f22512165ad3b506b521627bfdcf8789c0fcb8bb0a733c09bbc438a3402a9228\": rpc error: code = NotFound desc = could not find container \"f22512165ad3b506b521627bfdcf8789c0fcb8bb0a733c09bbc438a3402a9228\": container with ID starting with f22512165ad3b506b521627bfdcf8789c0fcb8bb0a733c09bbc438a3402a9228 not found: ID does not exist" Jan 26 09:46:47 crc kubenswrapper[4806]: I0126 09:46:47.058724 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39b07af-2ce3-4118-843d-7c4fe05848ec" path="/var/lib/kubelet/pods/e39b07af-2ce3-4118-843d-7c4fe05848ec/volumes" Jan 26 09:46:55 crc kubenswrapper[4806]: I0126 09:46:55.042134 4806 scope.go:117] "RemoveContainer" containerID="30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017" Jan 26 09:46:55 crc kubenswrapper[4806]: E0126 09:46:55.043202 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:47:08 crc kubenswrapper[4806]: I0126 09:47:08.042051 4806 scope.go:117] "RemoveContainer" containerID="30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017" Jan 26 09:47:08 crc kubenswrapper[4806]: E0126 09:47:08.042830 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:47:23 crc kubenswrapper[4806]: I0126 09:47:23.043027 4806 scope.go:117] "RemoveContainer" containerID="30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017" Jan 26 09:47:23 crc kubenswrapper[4806]: E0126 09:47:23.044207 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:47:36 crc kubenswrapper[4806]: I0126 09:47:36.042839 4806 scope.go:117] "RemoveContainer" containerID="30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017" Jan 26 09:47:36 crc kubenswrapper[4806]: E0126 09:47:36.046137 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50" Jan 26 09:47:49 crc kubenswrapper[4806]: I0126 09:47:49.047391 4806 scope.go:117] "RemoveContainer" containerID="30b0621374ad1344bda7b792ef451495a6c6e23e6c45091ee047de6935c61017" Jan 26 09:47:49 crc kubenswrapper[4806]: E0126 09:47:49.048062 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-k2tlk_openshift-machine-config-operator(d07502a2-50b0-4012-b335-340a1c694c50)\"" pod="openshift-machine-config-operator/machine-config-daemon-k2tlk" podUID="d07502a2-50b0-4012-b335-340a1c694c50"